PR: olemarkus: Enable IRSA for CCM
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-07-06 18:55
Elapsed: 57m43s
Revision: f2a02d66746b19a7b713af09d5759074b3628bd7
Refs: 11818

No Test Failures!


Error lines from build-log.txt

... skipping 486 lines ...
I0706 18:59:28.425137    4265 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0706 18:59:28.453832   11789 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 18:59:28.453945   11789 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 18:59:28.453951   11789 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
W0706 18:59:28.950561    4265 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0706 18:59:28.950773    4265 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --yes
I0706 18:59:28.992060   11799 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 18:59:28.992151   11799 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 18:59:28.992156   11799 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
I0706 18:59:29.511441    4265 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/06 18:59:29 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0706 18:59:29.522532    4265 http.go:37] curl https://ip.jsb.workers.dev
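The two probes above form an external-IP lookup with a fallback: the GCE metadata endpoint is tried first, and when it returns 404 (as it does here), a public IP echo service is queried instead. A minimal sketch of that logic, assuming plain `curl` against the same two endpoints (the `Metadata-Flavor` header follows GCE convention; the harness's exact flags may differ):

```shell
#!/bin/sh
# Sketch of the external-IP probe seen above: try the GCE metadata
# server first; on failure (it 404s here), fall back to a public
# IP echo service. -f makes curl exit nonzero on HTTP errors, so
# the || fallback fires.
get_external_ip() {
  curl -sf -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
    || curl -sf "https://ip.jsb.workers.dev"
}
```

The `-f` flag is what turns the 404 on the first URL into a nonzero exit status, so the second `curl` only runs when the metadata server cannot supply an address.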
I0706 18:59:29.663226    4265 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.0-beta.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=kubenet --container-runtime=containerd --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.serviceAccountIssuerDiscovery.discoveryStore=s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery --override=cluster.spec.serviceAccountIssuerDiscovery.enableAWSOIDCProvider=true --admin-access 34.123.129.137/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0706 18:59:29.690939   11809 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 18:59:29.691073   11809 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 18:59:29.691079   11809 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
I0706 18:59:29.749933   11809 create_cluster.go:740] Using SSH public key: /etc/aws-ssh/aws-ssh-public
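The `--override` flags on the create-cluster command above map onto fields of the kops cluster spec; the IRSA-related ones enable the AWS OIDC provider and point service-account issuer discovery at an S3 bucket. As a rough sketch (field names per the kops cluster API; exact shape may vary by kops version), the resulting spec would contain something like:

```yaml
# Hedged sketch of the cluster-spec fields set via --override above
spec:
  cloudControllerManager:
    cloudProvider: aws
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
    enableAWSOIDCProvider: true
  nodePortAccess:
  - 0.0.0.0/0
```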
... skipping 33 lines ...
I0706 18:59:52.433609    4265 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0706 18:59:52.486973   11830 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 18:59:52.487158   11830 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 18:59:52.487177   11830 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
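The `--wait 15m0s` flag above puts `kops validate cluster` into a retry loop; the repeated "(will retry)" warnings that follow are iterations of that loop. The shell-level analogue is roughly this (a sketch only; kops implements the loop internally, and `validate_with_wait` and `CLUSTER_NAME` are hypothetical names):

```shell
#!/bin/sh
# Retry validation until it succeeds or the deadline expires: the
# shell-level analogue of `kops validate cluster --wait 15m0s`,
# polling roughly every 10 seconds as the log above does.
validate_with_wait() {
  tries=$(( ${1:-900} / 10 ))
  while [ "$tries" -gt 0 ]; do
    kops validate cluster --name "$CLUSTER_NAME" && return 0
    tries=$((tries - 1))
    if [ "$tries" -gt 0 ]; then sleep 10; fi
  done
  return 1
}
```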
Validating cluster e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

W0706 18:59:53.645800   11830 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
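The placeholder address 203.0.113.123 that kops pre-creates makes this state easy to detect from outside: if the API record still resolves to it, dns-controller has not yet published the real master IP. A minimal check along those lines (`is_placeholder` is a hypothetical helper, assuming the same placeholder value):

```shell
#!/bin/sh
# Hypothetical check: does a resolved API address still equal the
# placeholder record kops creates before dns-controller updates it?
KOPS_DNS_PLACEHOLDER="203.0.113.123"

is_placeholder() {
  [ "$1" = "$KOPS_DNS_PLACEHOLDER" ]
}
```

Feeding this from something like `dig +short api.<cluster-name>` lets a wait loop distinguish "DNS not yet updated" from a genuinely unreachable API server.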
W0706 19:00:03.687698   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 270 identical validation-failure lines (the INSTANCE GROUPS / VALIDATION ERRORS block above, repeated on every retry) ...
W0706 19:00:13.724705   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:00:23.795236   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:00:33.825768   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:00:43.857082   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:00:53.883385   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:01:03.916911   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:01:13.952415   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:01:24.016954   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:01:34.055990   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:01:44.089574   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:01:54.160323   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:02:04.190515   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:02:14.235255   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:02:24.270101   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:02:34.306607   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:02:44.335939   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:02:54.384343   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:03:04.423001   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:03:14.453118   11830 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 15 identical validation-failure lines ...
W0706 19:03:24.487216   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 identical validation-failure lines ...
W0706 19:03:34.515700   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
W0706 19:03:44.570531   11830 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0706 19:04:24.615419   11830 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
W0706 19:04:34.644906   11830 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
... skipping 20 lines ...
ip-172-20-61-241.ca-central-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-61-17.ca-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-61-17.ca-central-1.compute.internal" is pending

Validation Failed
W0706 19:04:56.868417   11830 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 388 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 143 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 477 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:23.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7347" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:26.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6011" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:07:26.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul  6 19:07:26.824: INFO: found topology map[topology.kubernetes.io/zone:ca-central-1a]
Jul  6 19:07:26.824: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul  6 19:07:26.824: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 52 lines ...
Jul  6 19:07:23.432: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Jul  6 19:07:23.529: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb" in namespace "security-context-test-8204" to be "Succeeded or Failed"
Jul  6 19:07:23.560: INFO: Pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.531828ms
Jul  6 19:07:25.595: INFO: Pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066423185s
Jul  6 19:07:27.628: INFO: Pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098869574s
Jul  6 19:07:29.659: INFO: Pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13003931s
Jul  6 19:07:29.659: INFO: Pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb" satisfied condition "Succeeded or Failed"
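The "Succeeded or Failed" wait above is a simple poll on the pod's `.status.phase`. A sketch of the same pattern using `kubectl` (`wait_for_pod` is a hypothetical helper name; the e2e framework itself polls through the Go client, not kubectl):

```shell
#!/bin/sh
# Poll a pod's phase until it reaches a terminal state or the deadline
# passes, echoing the final phase. Mirrors the e2e framework's wait.
wait_for_pod() {
  ns=$1; pod=$2
  tries=$(( ${3:-300} / 10 ))
  while [ "$tries" -gt 0 ]; do
    phase=$(kubectl -n "$ns" get pod "$pod" -o jsonpath='{.status.phase}')
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    tries=$((tries - 1))
    if [ "$tries" -gt 0 ]; then sleep 10; fi
  done
  return 1  # still Pending/Running at the deadline
}
```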
Jul  6 19:07:29.945: INFO: Got logs for pod "busybox-privileged-true-522ee3d6-412a-4e62-85ad-4201cdcd65bb": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:29.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8204" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:30.055: INFO: Only supported for providers [azure] (not aws)
... skipping 23 lines ...
W0706 19:07:23.473230   12486 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  6 19:07:23.473: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  6 19:07:23.567: INFO: Waiting up to 5m0s for pod "pod-0a3a0c03-1027-4762-ac3b-118a9fd95073" in namespace "emptydir-567" to be "Succeeded or Failed"
Jul  6 19:07:23.599: INFO: Pod "pod-0a3a0c03-1027-4762-ac3b-118a9fd95073": Phase="Pending", Reason="", readiness=false. Elapsed: 32.197959ms
Jul  6 19:07:25.631: INFO: Pod "pod-0a3a0c03-1027-4762-ac3b-118a9fd95073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063509822s
Jul  6 19:07:27.662: INFO: Pod "pod-0a3a0c03-1027-4762-ac3b-118a9fd95073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094525442s
Jul  6 19:07:29.694: INFO: Pod "pod-0a3a0c03-1027-4762-ac3b-118a9fd95073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126271895s
STEP: Saw pod success
Jul  6 19:07:29.694: INFO: Pod "pod-0a3a0c03-1027-4762-ac3b-118a9fd95073" satisfied condition "Succeeded or Failed"
Jul  6 19:07:29.724: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-0a3a0c03-1027-4762-ac3b-118a9fd95073 container test-container: <nil>
STEP: delete the pod
Jul  6 19:07:30.017: INFO: Waiting for pod pod-0a3a0c03-1027-4762-ac3b-118a9fd95073 to disappear
Jul  6 19:07:30.048: INFO: Pod pod-0a3a0c03-1027-4762-ac3b-118a9fd95073 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.827 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:07:23.784: INFO: Waiting up to 5m0s for pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942" in namespace "downward-api-7973" to be "Succeeded or Failed"
Jul  6 19:07:23.815: INFO: Pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942": Phase="Pending", Reason="", readiness=false. Elapsed: 31.006499ms
Jul  6 19:07:25.848: INFO: Pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063879204s
Jul  6 19:07:27.879: INFO: Pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095240659s
Jul  6 19:07:29.911: INFO: Pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127083674s
Jul  6 19:07:31.946: INFO: Pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162349367s
STEP: Saw pod success
Jul  6 19:07:31.946: INFO: Pod "metadata-volume-87bf067b-8669-4e41-919a-85a80e480942" satisfied condition "Succeeded or Failed"
Jul  6 19:07:31.977: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod metadata-volume-87bf067b-8669-4e41-919a-85a80e480942 container client-container: <nil>
STEP: delete the pod
Jul  6 19:07:32.048: INFO: Waiting for pod metadata-volume-87bf067b-8669-4e41-919a-85a80e480942 to disappear
Jul  6 19:07:32.079: INFO: Pod metadata-volume-87bf067b-8669-4e41-919a-85a80e480942 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.830 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:32.197: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
Jul  6 19:07:23.495: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-0e17fe6e-050a-4432-8f79-ee3451772178
STEP: Creating a pod to test consume secrets
Jul  6 19:07:23.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d" in namespace "projected-5728" to be "Succeeded or Failed"
Jul  6 19:07:23.665: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.100186ms
Jul  6 19:07:25.696: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063457118s
Jul  6 19:07:27.727: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094021407s
Jul  6 19:07:29.758: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125014512s
Jul  6 19:07:31.790: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1573235s
Jul  6 19:07:33.821: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187923258s
STEP: Saw pod success
Jul  6 19:07:33.821: INFO: Pod "pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d" satisfied condition "Succeeded or Failed"
Jul  6 19:07:33.858: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  6 19:07:33.931: INFO: Waiting for pod pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d to disappear
Jul  6 19:07:33.962: INFO: Pod pod-projected-secrets-5b085167-41ba-4bf0-a8db-986fa504935d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.768 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run without a specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
Jul  6 19:07:33.779: INFO: Creating a PV followed by a PVC
Jul  6 19:07:33.844: INFO: Waiting for PV local-pvv4xbs to bind to PVC pvc-tcpkw
Jul  6 19:07:33.844: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-tcpkw] to have phase Bound
Jul  6 19:07:33.895: INFO: PersistentVolumeClaim pvc-tcpkw found and phase=Bound (51.361535ms)
Jul  6 19:07:33.895: INFO: Waiting up to 3m0s for PersistentVolume local-pvv4xbs to have phase Bound
Jul  6 19:07:33.929: INFO: PersistentVolume local-pvv4xbs found and phase=Bound (33.889483ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
Jul  6 19:07:34.049: INFO: Waiting up to 5m0s for pod "pod-6bf514f0-f776-4f5d-a54c-53b0b78f233f" in namespace "persistent-local-volumes-test-6376" to be "Unschedulable"
Jul  6 19:07:34.088: INFO: Pod "pod-6bf514f0-f776-4f5d-a54c-53b0b78f233f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.897322ms
Jul  6 19:07:34.088: INFO: Pod "pod-6bf514f0-f776-4f5d-a54c-53b0b78f233f" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:11.195 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":1,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  6 19:07:25.672: INFO: Waiting up to 5m0s for pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc" in namespace "emptydir-3057" to be "Succeeded or Failed"
Jul  6 19:07:25.702: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.407434ms
Jul  6 19:07:27.733: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061007983s
Jul  6 19:07:29.766: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093901094s
Jul  6 19:07:31.796: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124756265s
Jul  6 19:07:33.828: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156341444s
Jul  6 19:07:35.859: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187383977s
STEP: Saw pod success
Jul  6 19:07:35.859: INFO: Pod "pod-1eab57d2-3896-4295-a6c5-10283ac9dafc" satisfied condition "Succeeded or Failed"
Jul  6 19:07:35.890: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-1eab57d2-3896-4295-a6c5-10283ac9dafc container test-container: <nil>
STEP: delete the pod
Jul  6 19:07:35.959: INFO: Waiting for pod pod-1eab57d2-3896-4295-a6c5-10283ac9dafc to disappear
Jul  6 19:07:35.989: INFO: Pod pod-1eab57d2-3896-4295-a6c5-10283ac9dafc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
Jul  6 19:07:25.731: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-9e54bd22-a9d0-4ae1-b71c-dc7e20e23a1d
STEP: Creating a pod to test consume configMaps
Jul  6 19:07:25.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858" in namespace "configmap-8507" to be "Succeeded or Failed"
Jul  6 19:07:25.901: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858": Phase="Pending", Reason="", readiness=false. Elapsed: 30.502918ms
Jul  6 19:07:27.932: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06201129s
Jul  6 19:07:29.963: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092979472s
Jul  6 19:07:31.998: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127780056s
Jul  6 19:07:34.045: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174879039s
Jul  6 19:07:36.082: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211563904s
STEP: Saw pod success
Jul  6 19:07:36.082: INFO: Pod "pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858" satisfied condition "Succeeded or Failed"
Jul  6 19:07:36.113: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:07:36.193: INFO: Waiting for pod pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858 to disappear
Jul  6 19:07:36.229: INFO: Pod pod-configmaps-1e59ff8b-89b3-4ba4-93e5-7b0051ab1858 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.899 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":1,"skipped":19,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:37.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6899" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:37.665: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:38.361: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:39.275: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:07:23.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:39.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-2526" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:07:39.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1836
STEP: calling kubectl wait --for=delete
Jul  6 19:07:39.954: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6283 wait --for=delete pod/doesnotexist'
Jul  6 19:07:40.147: INFO: stderr: ""
Jul  6 19:07:40.147: INFO: stdout: ""
Jul  6 19:07:40.147: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6283 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:40.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6283" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:40.441: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:40.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3858" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:07:37.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809" in namespace "downward-api-4744" to be "Succeeded or Failed"
Jul  6 19:07:37.919: INFO: Pod "downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809": Phase="Pending", Reason="", readiness=false. Elapsed: 31.546601ms
Jul  6 19:07:39.950: INFO: Pod "downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062659806s
Jul  6 19:07:41.983: INFO: Pod "downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094921676s
STEP: Saw pod success
Jul  6 19:07:41.983: INFO: Pod "downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809" satisfied condition "Succeeded or Failed"
Jul  6 19:07:42.013: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809 container client-container: <nil>
STEP: delete the pod
Jul  6 19:07:42.080: INFO: Waiting for pod downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809 to disappear
Jul  6 19:07:42.111: INFO: Pod downwardapi-volume-f2df6f7a-5943-49b3-aece-ea6ab0698809 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:42.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4744" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:42.190: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 107 lines ...
• [SLOW TEST:20.649 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:21.309 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:52
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:46.231: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:48.446: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
• [SLOW TEST:10.444 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:07:43.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul  6 19:07:44.037: INFO: Waiting up to 5m0s for pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268" in namespace "security-context-1882" to be "Succeeded or Failed"
Jul  6 19:07:44.067: INFO: Pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268": Phase="Pending", Reason="", readiness=false. Elapsed: 30.10947ms
Jul  6 19:07:46.099: INFO: Pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061886495s
Jul  6 19:07:48.130: INFO: Pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093350396s
Jul  6 19:07:50.162: INFO: Pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124707232s
Jul  6 19:07:52.193: INFO: Pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156450339s
STEP: Saw pod success
Jul  6 19:07:52.193: INFO: Pod "security-context-76e74338-209b-46f0-9909-3f9e95eea268" satisfied condition "Succeeded or Failed"
Jul  6 19:07:52.224: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod security-context-76e74338-209b-46f0-9909-3f9e95eea268 container test-container: <nil>
STEP: delete the pod
Jul  6 19:07:52.291: INFO: Waiting for pod security-context-76e74338-209b-46f0-9909-3f9e95eea268 to disappear
Jul  6 19:07:52.321: INFO: Pod security-context-76e74338-209b-46f0-9909-3f9e95eea268 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.532 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:52.392: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:52.427: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 44 lines ...
• [SLOW TEST:29.654 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:53.031: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:54.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7827" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:07:51.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8" in namespace "downward-api-9459" to be "Succeeded or Failed"
Jul  6 19:07:51.161: INFO: Pod "downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.14453ms
Jul  6 19:07:53.195: INFO: Pod "downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064680238s
Jul  6 19:07:55.227: INFO: Pod "downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097123417s
STEP: Saw pod success
Jul  6 19:07:55.228: INFO: Pod "downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8" satisfied condition "Succeeded or Failed"
Jul  6 19:07:55.260: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8 container client-container: <nil>
STEP: delete the pod
Jul  6 19:07:55.334: INFO: Waiting for pod downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8 to disappear
Jul  6 19:07:55.366: INFO: Pod downwardapi-volume-70ba1164-4c37-4319-860c-8aefd8761ff8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:55.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9459" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:07:51.468: INFO: Waiting up to 5m0s for pod "metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b" in namespace "downward-api-4919" to be "Succeeded or Failed"
Jul  6 19:07:51.499: INFO: Pod "metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.264647ms
Jul  6 19:07:53.533: INFO: Pod "metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06498283s
Jul  6 19:07:55.573: INFO: Pod "metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105045418s
STEP: Saw pod success
Jul  6 19:07:55.573: INFO: Pod "metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b" satisfied condition "Succeeded or Failed"
Jul  6 19:07:55.605: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b container client-container: <nil>
STEP: delete the pod
Jul  6 19:07:55.676: INFO: Waiting for pod metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b to disappear
Jul  6 19:07:55.707: INFO: Pod metadata-volume-4b20d7ea-96f0-4452-b2e8-e472d6a5a62b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:55.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4919" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":14,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:55.820: INFO: Only supported for providers [gce gke] (not aws)
... skipping 73 lines ...
Jul  6 19:07:42.211: INFO: PersistentVolumeClaim pvc-29mb2 found but phase is Pending instead of Bound.
Jul  6 19:07:44.243: INFO: PersistentVolumeClaim pvc-29mb2 found and phase=Bound (12.240659814s)
Jul  6 19:07:44.243: INFO: Waiting up to 3m0s for PersistentVolume local-cfxhk to have phase Bound
Jul  6 19:07:44.274: INFO: PersistentVolume local-cfxhk found and phase=Bound (30.840246ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-p7q4
STEP: Creating a pod to test subpath
Jul  6 19:07:44.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-p7q4" in namespace "provisioning-4410" to be "Succeeded or Failed"
Jul  6 19:07:44.398: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.666246ms
Jul  6 19:07:46.429: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061726922s
Jul  6 19:07:48.460: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093146749s
Jul  6 19:07:50.491: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12412007s
Jul  6 19:07:52.523: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155578534s
Jul  6 19:07:54.555: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187622966s
STEP: Saw pod success
Jul  6 19:07:54.555: INFO: Pod "pod-subpath-test-preprovisionedpv-p7q4" satisfied condition "Succeeded or Failed"
Jul  6 19:07:54.585: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-p7q4 container test-container-subpath-preprovisionedpv-p7q4: <nil>
STEP: delete the pod
Jul  6 19:07:54.653: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-p7q4 to disappear
Jul  6 19:07:54.683: INFO: Pod pod-subpath-test-preprovisionedpv-p7q4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-p7q4
Jul  6 19:07:54.683: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-p7q4" in namespace "provisioning-4410"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:56.029: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 90 lines ...
Jul  6 19:07:42.315: INFO: PersistentVolumeClaim pvc-6mntf found but phase is Pending instead of Bound.
Jul  6 19:07:44.351: INFO: PersistentVolumeClaim pvc-6mntf found and phase=Bound (6.139718806s)
Jul  6 19:07:44.351: INFO: Waiting up to 3m0s for PersistentVolume local-mxrb9 to have phase Bound
Jul  6 19:07:44.385: INFO: PersistentVolume local-mxrb9 found and phase=Bound (34.110609ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-65p8
STEP: Creating a pod to test subpath
Jul  6 19:07:44.484: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-65p8" in namespace "provisioning-7050" to be "Succeeded or Failed"
Jul  6 19:07:44.516: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.858716ms
Jul  6 19:07:46.548: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063896698s
Jul  6 19:07:48.581: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097253507s
Jul  6 19:07:50.614: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129408213s
Jul  6 19:07:52.649: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164746702s
Jul  6 19:07:54.681: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197340186s
STEP: Saw pod success
Jul  6 19:07:54.682: INFO: Pod "pod-subpath-test-preprovisionedpv-65p8" satisfied condition "Succeeded or Failed"
Jul  6 19:07:54.713: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-65p8 container test-container-subpath-preprovisionedpv-65p8: <nil>
STEP: delete the pod
Jul  6 19:07:54.789: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-65p8 to disappear
Jul  6 19:07:54.821: INFO: Pod pod-subpath-test-preprovisionedpv-65p8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-65p8
Jul  6 19:07:54.821: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-65p8" in namespace "provisioning-7050"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:56.216: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
W0706 19:07:23.473693   12544 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  6 19:07:23.473: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  6 19:07:23.536: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:07:23.644: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7682" in namespace "provisioning-7682" to be "Succeeded or Failed"
Jul  6 19:07:23.677: INFO: Pod "hostpath-symlink-prep-provisioning-7682": Phase="Pending", Reason="", readiness=false. Elapsed: 32.780942ms
Jul  6 19:07:25.709: INFO: Pod "hostpath-symlink-prep-provisioning-7682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064977906s
STEP: Saw pod success
Jul  6 19:07:25.710: INFO: Pod "hostpath-symlink-prep-provisioning-7682" satisfied condition "Succeeded or Failed"
Jul  6 19:07:25.710: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7682" in namespace "provisioning-7682"
Jul  6 19:07:25.748: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7682" to be fully deleted
Jul  6 19:07:25.781: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xltk
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 19:07:25.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xltk" in namespace "provisioning-7682" to be "Succeeded or Failed"
Jul  6 19:07:25.848: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.477753ms
Jul  6 19:07:27.880: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066680571s
Jul  6 19:07:29.912: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 4.098553125s
Jul  6 19:07:31.947: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 6.133094518s
Jul  6 19:07:33.981: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 8.167575749s
Jul  6 19:07:36.014: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 10.200605698s
... skipping 2 lines ...
Jul  6 19:07:42.112: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 16.297771525s
Jul  6 19:07:44.144: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 18.330303576s
Jul  6 19:07:46.176: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 20.362503308s
Jul  6 19:07:48.209: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Running", Reason="", readiness=true. Elapsed: 22.39497827s
Jul  6 19:07:50.246: INFO: Pod "pod-subpath-test-inlinevolume-xltk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.4324654s
STEP: Saw pod success
Jul  6 19:07:50.246: INFO: Pod "pod-subpath-test-inlinevolume-xltk" satisfied condition "Succeeded or Failed"
Jul  6 19:07:50.278: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-xltk container test-container-subpath-inlinevolume-xltk: <nil>
STEP: delete the pod
Jul  6 19:07:50.357: INFO: Waiting for pod pod-subpath-test-inlinevolume-xltk to disappear
Jul  6 19:07:50.388: INFO: Pod pod-subpath-test-inlinevolume-xltk no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xltk
Jul  6 19:07:50.388: INFO: Deleting pod "pod-subpath-test-inlinevolume-xltk" in namespace "provisioning-7682"
STEP: Deleting pod
Jul  6 19:07:50.419: INFO: Deleting pod "pod-subpath-test-inlinevolume-xltk" in namespace "provisioning-7682"
Jul  6 19:07:50.483: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7682" in namespace "provisioning-7682" to be "Succeeded or Failed"
Jul  6 19:07:50.514: INFO: Pod "hostpath-symlink-prep-provisioning-7682": Phase="Pending", Reason="", readiness=false. Elapsed: 31.190136ms
Jul  6 19:07:52.545: INFO: Pod "hostpath-symlink-prep-provisioning-7682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062463319s
Jul  6 19:07:54.579: INFO: Pod "hostpath-symlink-prep-provisioning-7682": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095965289s
Jul  6 19:07:56.616: INFO: Pod "hostpath-symlink-prep-provisioning-7682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132802092s
STEP: Saw pod success
Jul  6 19:07:56.616: INFO: Pod "hostpath-symlink-prep-provisioning-7682" satisfied condition "Succeeded or Failed"
Jul  6 19:07:56.616: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7682" in namespace "provisioning-7682"
Jul  6 19:07:56.656: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7682" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:56.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7682" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:56.837: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-d12e4579-f744-4497-9b42-ed33a35e4875
STEP: Creating a pod to test consume configMaps
Jul  6 19:07:52.666: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1" in namespace "projected-6365" to be "Succeeded or Failed"
Jul  6 19:07:52.696: INFO: Pod "pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.385517ms
Jul  6 19:07:54.727: INFO: Pod "pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1": Phase="Running", Reason="", readiness=true. Elapsed: 2.0608039s
Jul  6 19:07:56.758: INFO: Pod "pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092060021s
STEP: Saw pod success
Jul  6 19:07:56.758: INFO: Pod "pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1" satisfied condition "Succeeded or Failed"
Jul  6 19:07:56.789: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:07:56.857: INFO: Waiting for pod pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1 to disappear
Jul  6 19:07:56.895: INFO: Pod pod-projected-configmaps-9e5d7147-47d7-4b35-b773-ae530eaeb9f1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:07:56.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6365" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:993
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:58.128: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 66 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:58.194: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 276 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:07:59.593: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 129 lines ...
Jul  6 19:07:31.644: INFO: PersistentVolumeClaim csi-hostpath2k2f2 found but phase is Pending instead of Bound.
Jul  6 19:07:33.677: INFO: PersistentVolumeClaim csi-hostpath2k2f2 found but phase is Pending instead of Bound.
Jul  6 19:07:35.709: INFO: PersistentVolumeClaim csi-hostpath2k2f2 found but phase is Pending instead of Bound.
Jul  6 19:07:37.743: INFO: PersistentVolumeClaim csi-hostpath2k2f2 found and phase=Bound (10.197686846s)
STEP: Creating pod pod-subpath-test-dynamicpv-7t82
STEP: Creating a pod to test subpath
Jul  6 19:07:37.839: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7t82" in namespace "provisioning-1848" to be "Succeeded or Failed"
Jul  6 19:07:37.872: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Pending", Reason="", readiness=false. Elapsed: 33.31395ms
Jul  6 19:07:39.904: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065469093s
Jul  6 19:07:41.937: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09786439s
Jul  6 19:07:43.969: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130038789s
Jul  6 19:07:46.002: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163376858s
Jul  6 19:07:48.034: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Pending", Reason="", readiness=false. Elapsed: 10.195312562s
Jul  6 19:07:50.066: INFO: Pod "pod-subpath-test-dynamicpv-7t82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.227579903s
STEP: Saw pod success
Jul  6 19:07:50.066: INFO: Pod "pod-subpath-test-dynamicpv-7t82" satisfied condition "Succeeded or Failed"
Jul  6 19:07:50.098: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-7t82 container test-container-subpath-dynamicpv-7t82: <nil>
STEP: delete the pod
Jul  6 19:07:50.181: INFO: Waiting for pod pod-subpath-test-dynamicpv-7t82 to disappear
Jul  6 19:07:50.213: INFO: Pod pod-subpath-test-dynamicpv-7t82 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7t82
Jul  6 19:07:50.213: INFO: Deleting pod "pod-subpath-test-dynamicpv-7t82" in namespace "provisioning-1848"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:04.022: INFO: Only supported for providers [azure] (not aws)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:07:56.421: INFO: Waiting up to 5m0s for pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7" in namespace "projected-8209" to be "Succeeded or Failed"
Jul  6 19:07:56.457: INFO: Pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.631696ms
Jul  6 19:07:58.490: INFO: Pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068286357s
Jul  6 19:08:00.522: INFO: Pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100393669s
Jul  6 19:08:02.554: INFO: Pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132407266s
Jul  6 19:08:04.637: INFO: Pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.215858183s
STEP: Saw pod success
Jul  6 19:08:04.637: INFO: Pod "metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7" satisfied condition "Succeeded or Failed"
Jul  6 19:08:04.672: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7 container client-container: <nil>
STEP: delete the pod
Jul  6 19:08:04.794: INFO: Waiting for pod metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7 to disappear
Jul  6 19:08:04.826: INFO: Pod metadata-volume-d572dadd-bee6-4691-bb11-79f099e3d6c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.674 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:04.908: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 106 lines ...
• [SLOW TEST:38.907 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:09.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-7280" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:13.096: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 214 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:14.443: INFO: Only supported for providers [gce gke] (not aws)
... skipping 109 lines ...
STEP: Destroying namespace "services-4430" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:14.938: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 134 lines ...
• [SLOW TEST:20.649 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:15.409: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 137 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-49b10679-17d1-4560-a2db-180afce04943
STEP: Creating a pod to test consume secrets
Jul  6 19:08:15.192: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2" in namespace "projected-72" to be "Succeeded or Failed"
Jul  6 19:08:15.222: INFO: Pod "pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.293508ms
Jul  6 19:08:17.254: INFO: Pod "pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06203037s
STEP: Saw pod success
Jul  6 19:08:17.254: INFO: Pod "pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2" satisfied condition "Succeeded or Failed"
Jul  6 19:08:17.284: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  6 19:08:17.356: INFO: Waiting for pod pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2 to disappear
Jul  6 19:08:17.386: INFO: Pod pod-projected-secrets-469820f0-5b34-42fe-901b-610ac0b8c2c2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:17.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-72" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:19.672: INFO: Only supported for providers [azure] (not aws)
... skipping 43 lines ...
Jul  6 19:08:12.106: INFO: PersistentVolumeClaim pvc-cww9f found but phase is Pending instead of Bound.
Jul  6 19:08:14.140: INFO: PersistentVolumeClaim pvc-cww9f found and phase=Bound (12.221924465s)
Jul  6 19:08:14.140: INFO: Waiting up to 3m0s for PersistentVolume local-rxbhs to have phase Bound
Jul  6 19:08:14.172: INFO: PersistentVolume local-rxbhs found and phase=Bound (31.947744ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-m9vw
STEP: Creating a pod to test exec-volume-test
Jul  6 19:08:14.268: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-m9vw" in namespace "volume-6902" to be "Succeeded or Failed"
Jul  6 19:08:14.298: INFO: Pod "exec-volume-test-preprovisionedpv-m9vw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.138044ms
Jul  6 19:08:16.330: INFO: Pod "exec-volume-test-preprovisionedpv-m9vw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062218671s
Jul  6 19:08:18.362: INFO: Pod "exec-volume-test-preprovisionedpv-m9vw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093740417s
Jul  6 19:08:20.393: INFO: Pod "exec-volume-test-preprovisionedpv-m9vw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124767011s
STEP: Saw pod success
Jul  6 19:08:20.393: INFO: Pod "exec-volume-test-preprovisionedpv-m9vw" satisfied condition "Succeeded or Failed"
Jul  6 19:08:20.427: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-m9vw container exec-container-preprovisionedpv-m9vw: <nil>
STEP: delete the pod
Jul  6 19:08:20.507: INFO: Waiting for pod exec-volume-test-preprovisionedpv-m9vw to disappear
Jul  6 19:08:20.537: INFO: Pod exec-volume-test-preprovisionedpv-m9vw no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-m9vw
Jul  6 19:08:20.538: INFO: Deleting pod "exec-volume-test-preprovisionedpv-m9vw" in namespace "volume-6902"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:08:13.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78" in namespace "downward-api-4875" to be "Succeeded or Failed"
Jul  6 19:08:13.493: INFO: Pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78": Phase="Pending", Reason="", readiness=false. Elapsed: 30.992023ms
Jul  6 19:08:15.532: INFO: Pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069746358s
Jul  6 19:08:17.563: INFO: Pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101396001s
Jul  6 19:08:19.598: INFO: Pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13575561s
Jul  6 19:08:21.629: INFO: Pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167596838s
STEP: Saw pod success
Jul  6 19:08:21.630: INFO: Pod "downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78" satisfied condition "Succeeded or Failed"
Jul  6 19:08:21.662: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78 container client-container: <nil>
STEP: delete the pod
Jul  6 19:08:21.732: INFO: Waiting for pod downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78 to disappear
Jul  6 19:08:21.763: INFO: Pod downwardapi-volume-6809395b-9d73-493f-8d4f-08b243f64c78 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.594 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:21.857: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
Jul  6 19:08:15.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Jul  6 19:08:15.720: INFO: Waiting up to 5m0s for pod "downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799" in namespace "downward-api-2175" to be "Succeeded or Failed"
Jul  6 19:08:15.752: INFO: Pod "downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799": Phase="Pending", Reason="", readiness=false. Elapsed: 32.143884ms
Jul  6 19:08:17.784: INFO: Pod "downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063935145s
Jul  6 19:08:19.816: INFO: Pod "downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095609082s
Jul  6 19:08:21.848: INFO: Pod "downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127606439s
STEP: Saw pod success
Jul  6 19:08:21.848: INFO: Pod "downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799" satisfied condition "Succeeded or Failed"
Jul  6 19:08:21.879: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799 container dapi-container: <nil>
STEP: delete the pod
Jul  6 19:08:21.947: INFO: Waiting for pod downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799 to disappear
Jul  6 19:08:21.978: INFO: Pod downward-api-8089fe3b-d262-4546-9293-c5d39c4e8799 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.513 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":3,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:22.061: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:22.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8231" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:22.216: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:07:39.665: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Jul  6 19:07:56.561: INFO: PersistentVolumeClaim pvc-q6hfj found but phase is Pending instead of Bound.
Jul  6 19:07:58.604: INFO: PersistentVolumeClaim pvc-q6hfj found and phase=Bound (12.239144689s)
Jul  6 19:07:58.604: INFO: Waiting up to 3m0s for PersistentVolume local-jkxw8 to have phase Bound
Jul  6 19:07:58.636: INFO: PersistentVolume local-jkxw8 found and phase=Bound (31.295163ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9wd5
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 19:07:58.733: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9wd5" in namespace "provisioning-9576" to be "Succeeded or Failed"
Jul  6 19:07:58.764: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.918849ms
Jul  6 19:08:00.797: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063362108s
Jul  6 19:08:02.828: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094212856s
Jul  6 19:08:04.863: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129424342s
Jul  6 19:08:06.894: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Running", Reason="", readiness=true. Elapsed: 8.160345314s
Jul  6 19:08:08.925: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Running", Reason="", readiness=true. Elapsed: 10.191640628s
... skipping 2 lines ...
Jul  6 19:08:15.021: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Running", Reason="", readiness=true. Elapsed: 16.287392466s
Jul  6 19:08:17.056: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Running", Reason="", readiness=true. Elapsed: 18.322891005s
Jul  6 19:08:19.090: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Running", Reason="", readiness=true. Elapsed: 20.356788895s
Jul  6 19:08:21.122: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Running", Reason="", readiness=true. Elapsed: 22.389037876s
Jul  6 19:08:23.155: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.421267462s
STEP: Saw pod success
Jul  6 19:08:23.155: INFO: Pod "pod-subpath-test-preprovisionedpv-9wd5" satisfied condition "Succeeded or Failed"
Jul  6 19:08:23.190: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9wd5 container test-container-subpath-preprovisionedpv-9wd5: <nil>
STEP: delete the pod
Jul  6 19:08:23.260: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9wd5 to disappear
Jul  6 19:08:23.291: INFO: Pod pod-subpath-test-preprovisionedpv-9wd5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9wd5
Jul  6 19:08:23.291: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9wd5" in namespace "provisioning-9576"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:23.833: INFO: Only supported for providers [openstack] (not aws)
... skipping 187 lines ...
Jul  6 19:08:22.509: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  6 19:08:22.509: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1926 describe pod agnhost-primary-7kf5k'
Jul  6 19:08:22.751: INFO: stderr: ""
Jul  6 19:08:22.751: INFO: stdout: "Name:         agnhost-primary-7kf5k\nNamespace:    kubectl-1926\nPriority:     0\nNode:         ip-172-20-61-17.ca-central-1.compute.internal/172.20.61.17\nStart Time:   Tue, 06 Jul 2021 19:08:15 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.4.23\nIPs:\n  IP:           100.96.4.23\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://b3163c5f4468e8a443a101d6810d73ba46103e7896f6a2aafc0879bb256bf76e\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 06 Jul 2021 19:08:16 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t86rb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-t86rb:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  7s    default-scheduler  Successfully assigned kubectl-1926/agnhost-primary-7kf5k to ip-172-20-61-17.ca-central-1.compute.internal\n  Normal  Pulled     6s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    6s    kubelet            Created container agnhost-primary\n  Normal  Started    6s    kubelet            Started container agnhost-primary\n"
Jul  6 19:08:22.751: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1926 describe rc agnhost-primary'
Jul  6 19:08:23.020: INFO: stderr: ""
Jul  6 19:08:23.020: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-1926\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-primary-7kf5k\n"
Jul  6 19:08:23.020: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1926 describe service agnhost-primary'
Jul  6 19:08:23.278: INFO: stderr: ""
Jul  6 19:08:23.278: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-1926\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.64.162.241\nIPs:               100.64.162.241\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.4.23:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jul  6 19:08:23.326: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1926 describe node ip-172-20-44-51.ca-central-1.compute.internal'
Jul  6 19:08:23.723: INFO: stderr: ""
Jul  6 19:08:23.723: INFO: stdout: "Name:               ip-172-20-44-51.ca-central-1.compute.internal\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=c5.large\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=ca-central-1\n                    failure-domain.beta.kubernetes.io/zone=ca-central-1a\n                    kops.k8s.io/instancegroup=master-ca-central-1a\n                    kops.k8s.io/kops-controller-pki=\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-44-51.ca-central-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=master\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\n                    node.kubernetes.io/instance-type=c5.large\n                    topology.ebs.csi.aws.com/zone=ca-central-1a\n                    topology.kubernetes.io/region=ca-central-1\n                    topology.kubernetes.io/zone=ca-central-1a\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-0199654295adf8c6d\"}\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 06 Jul 2021 19:02:11 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-44-51.ca-central-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 06 Jul 2021 19:08:15 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 06 Jul 2021 19:07:53 +0000   Tue, 06 Jul 2021 19:02:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 06 Jul 2021 19:07:53 +0000   Tue, 06 Jul 2021 19:02:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 06 Jul 2021 19:07:53 +0000   Tue, 06 Jul 2021 19:02:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 06 Jul 2021 19:07:53 +0000   Tue, 06 Jul 2021 19:02:27 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   172.20.44.51\n  ExternalIP:   35.182.118.89\n  InternalDNS:  ip-172-20-44-51.ca-central-1.compute.internal\n  Hostname:     ip-172-20-44-51.ca-central-1.compute.internal\n  ExternalDNS:  ec2-35-182-118-89.ca-central-1.compute.amazonaws.com\nCapacity:\n  cpu:                2\n  ephemeral-storage:  48725632Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3784324Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  44905542377\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3681924Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 ec223875553ea40f3fbc9895c36a2bec\n  System UUID:                ec223875-553e-a40f-3fbc-9895c36a2bec\n  Boot ID:                    e41c565f-38ae-46ea-a8db-f5a02c244dc2\n  Kernel Version:             5.8.0-1038-aws\n  OS Image:                   Ubuntu 20.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.6\n  Kubelet Version:            v1.22.0-beta.0\n  Kube-Proxy Version:         v1.22.0-beta.0\nPodCIDR:                      100.96.0.0/24\nPodCIDRs:                     100.96.0.0/24\nProviderID:                   aws:///ca-central-1a/i-0199654295adf8c6d\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                     ------------  ----------  ---------------  -------------  ---\n  kube-system                 aws-cloud-controller-manager-5n582                                       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s\n  kube-system                 dns-controller-5db8dc7c7-gf89c                                           50m (2%)      0 (0%)      50Mi (1%)        0 (0%)         5m56s\n  kube-system                 ebs-csi-node-2z69r                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s\n  kube-system                 etcd-manager-events-ip-172-20-44-51.ca-central-1.compute.internal        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m20s\n  kube-system                 etcd-manager-main-ip-172-20-44-51.ca-central-1.compute.internal          200m (10%)    0 (0%)      100Mi (2%)       0 (0%)         5m7s\n  kube-system                 kops-controller-vcf2v                                                    50m (2%)      0 (0%)      50Mi (1%)        0 (0%)         5m34s\n  kube-system                 kube-apiserver-ip-172-20-44-51.ca-central-1.compute.internal             150m (7%)     0 (0%)      0 (0%)           0 (0%)         4m58s\n  kube-system                 kube-controller-manager-ip-172-20-44-51.ca-central-1.compute.internal    100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s\n  kube-system                 kube-proxy-ip-172-20-44-51.ca-central-1.compute.internal                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s\n  kube-system                 kube-scheduler-ip-172-20-44-51.ca-central-1.compute.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests     Limits\n  --------           --------     ------\n  cpu                1050m (52%)  0 (0%)\n  memory             300Mi (8%)   0 (0%)\n  ephemeral-storage  0 (0%)       0 (0%)\n  hugepages-1Gi      0 (0%)       0 (0%)\n  hugepages-2Mi      0 (0%)       0 (0%)\nEvents:\n  Type    Reason                   Age                  From        Message\n  ----    ------                   ----                 ----        -------\n  Normal  NodeHasSufficientMemory  7m6s (x8 over 7m6s)  kubelet     Node ip-172-20-44-51.ca-central-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    7m6s (x7 over 7m6s)  kubelet     Node ip-172-20-44-51.ca-central-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     7m6s (x7 over 7m6s)  kubelet     Node ip-172-20-44-51.ca-central-1.compute.internal status is now: NodeHasSufficientPID\n  Normal  Starting                 6m10s                kube-proxy  Starting kube-proxy.\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1094
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:24.093: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:08:22.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b" in namespace "projected-7639" to be "Succeeded or Failed"
Jul  6 19:08:22.331: INFO: Pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.428308ms
Jul  6 19:08:24.363: INFO: Pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07258898s
Jul  6 19:08:26.394: INFO: Pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103648119s
Jul  6 19:08:28.430: INFO: Pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139653487s
Jul  6 19:08:30.462: INFO: Pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171137449s
STEP: Saw pod success
Jul  6 19:08:30.462: INFO: Pod "downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b" satisfied condition "Succeeded or Failed"
Jul  6 19:08:30.493: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b container client-container: <nil>
STEP: delete the pod
Jul  6 19:08:30.563: INFO: Waiting for pod downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b to disappear
Jul  6 19:08:30.595: INFO: Pod downwardapi-volume-6900e489-d058-493a-9e09-2e9b7b56835b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  6 19:08:06.064: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  6 19:08:06.064: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-nstf
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 19:08:06.097: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-nstf" in namespace "provisioning-8427" to be "Succeeded or Failed"
Jul  6 19:08:06.130: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Pending", Reason="", readiness=false. Elapsed: 33.22757ms
Jul  6 19:08:08.162: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064485822s
Jul  6 19:08:10.197: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 4.099898551s
Jul  6 19:08:12.228: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 6.131280714s
Jul  6 19:08:14.261: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 8.163458447s
Jul  6 19:08:16.293: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 10.19547646s
... skipping 3 lines ...
Jul  6 19:08:24.420: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 18.323004891s
Jul  6 19:08:26.451: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 20.354270766s
Jul  6 19:08:28.482: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 22.385335347s
Jul  6 19:08:30.515: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Running", Reason="", readiness=true. Elapsed: 24.417394881s
Jul  6 19:08:32.546: INFO: Pod "pod-subpath-test-inlinevolume-nstf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.448513315s
STEP: Saw pod success
Jul  6 19:08:32.546: INFO: Pod "pod-subpath-test-inlinevolume-nstf" satisfied condition "Succeeded or Failed"
Jul  6 19:08:32.576: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-nstf container test-container-subpath-inlinevolume-nstf: <nil>
STEP: delete the pod
Jul  6 19:08:32.647: INFO: Waiting for pod pod-subpath-test-inlinevolume-nstf to disappear
Jul  6 19:08:32.677: INFO: Pod pod-subpath-test-inlinevolume-nstf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-nstf
Jul  6 19:08:32.677: INFO: Deleting pod "pod-subpath-test-inlinevolume-nstf" in namespace "provisioning-8427"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:32.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":5,"skipped":22,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:32.975: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Jul  6 19:08:25.291: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-755" to be "Succeeded or Failed"
Jul  6 19:08:25.322: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 30.602274ms
Jul  6 19:08:27.354: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062430389s
Jul  6 19:08:29.386: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094431034s
Jul  6 19:08:31.417: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12561614s
Jul  6 19:08:33.499: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.207533858s
Jul  6 19:08:33.499: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:33.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-755" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:33.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4372" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":6,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:33.957: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
Jul  6 19:07:59.112: INFO: PersistentVolumeClaim csi-hostpathlcxk7 found but phase is Pending instead of Bound.
Jul  6 19:08:01.146: INFO: PersistentVolumeClaim csi-hostpathlcxk7 found but phase is Pending instead of Bound.
Jul  6 19:08:03.195: INFO: PersistentVolumeClaim csi-hostpathlcxk7 found but phase is Pending instead of Bound.
Jul  6 19:08:05.230: INFO: PersistentVolumeClaim csi-hostpathlcxk7 found and phase=Bound (6.149316135s)
STEP: Creating pod pod-subpath-test-dynamicpv-dq5k
STEP: Creating a pod to test subpath
Jul  6 19:08:05.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dq5k" in namespace "provisioning-9366" to be "Succeeded or Failed"
Jul  6 19:08:05.356: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 31.270168ms
Jul  6 19:08:07.389: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064609659s
Jul  6 19:08:09.421: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095993558s
Jul  6 19:08:11.453: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128685125s
Jul  6 19:08:13.485: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160111989s
Jul  6 19:08:15.517: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192208827s
Jul  6 19:08:17.549: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.224779034s
Jul  6 19:08:19.585: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.260444648s
Jul  6 19:08:21.617: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 16.292323028s
Jul  6 19:08:23.651: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Pending", Reason="", readiness=false. Elapsed: 18.32664996s
Jul  6 19:08:25.683: INFO: Pod "pod-subpath-test-dynamicpv-dq5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.358689655s
STEP: Saw pod success
Jul  6 19:08:25.683: INFO: Pod "pod-subpath-test-dynamicpv-dq5k" satisfied condition "Succeeded or Failed"
Jul  6 19:08:25.715: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-dq5k container test-container-volume-dynamicpv-dq5k: <nil>
STEP: delete the pod
Jul  6 19:08:25.785: INFO: Waiting for pod pod-subpath-test-dynamicpv-dq5k to disappear
Jul  6 19:08:25.817: INFO: Pod pod-subpath-test-dynamicpv-dq5k no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dq5k
Jul  6 19:08:25.817: INFO: Deleting pod "pod-subpath-test-dynamicpv-dq5k" in namespace "provisioning-9366"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:39.610: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 206 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:40.376: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:40.777: INFO: Only supported for providers [gce gke] (not aws)
... skipping 84 lines ...
• [SLOW TEST:8.513 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:265
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:42.173: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 89 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-5f9aebf7-2282-4e14-8f8b-5de8dfa3e1de
STEP: Creating a pod to test consume secrets
Jul  6 19:08:40.629: INFO: Waiting up to 5m0s for pod "pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1" in namespace "secrets-5070" to be "Succeeded or Failed"
Jul  6 19:08:40.661: INFO: Pod "pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.91912ms
Jul  6 19:08:42.693: INFO: Pod "pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064131033s
Jul  6 19:08:44.725: INFO: Pod "pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096023287s
STEP: Saw pod success
Jul  6 19:08:44.725: INFO: Pod "pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1" satisfied condition "Succeeded or Failed"
Jul  6 19:08:44.757: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1 container secret-volume-test: <nil>
STEP: delete the pod
Jul  6 19:08:44.827: INFO: Waiting for pod pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1 to disappear
Jul  6 19:08:44.858: INFO: Pod pod-secrets-ba3c2f19-38dc-404e-9a6e-853a0f8acbf1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:44.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5070" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:500
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":7,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:08:45.727: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 41 lines ...
Jul  6 19:08:26.204: INFO: PersistentVolumeClaim pvc-nwm9d found but phase is Pending instead of Bound.
Jul  6 19:08:28.235: INFO: PersistentVolumeClaim pvc-nwm9d found and phase=Bound (6.129985915s)
Jul  6 19:08:28.235: INFO: Waiting up to 3m0s for PersistentVolume local-d4mnb to have phase Bound
Jul  6 19:08:28.266: INFO: PersistentVolume local-d4mnb found and phase=Bound (30.152779ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pc8t
STEP: Creating a pod to test subpath
Jul  6 19:08:28.358: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pc8t" in namespace "provisioning-1424" to be "Succeeded or Failed"
Jul  6 19:08:28.388: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 30.262088ms
Jul  6 19:08:30.419: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061126158s
Jul  6 19:08:32.451: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092889379s
Jul  6 19:08:34.484: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126456624s
Jul  6 19:08:36.515: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157368382s
Jul  6 19:08:38.548: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189757503s
STEP: Saw pod success
Jul  6 19:08:38.548: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t" satisfied condition "Succeeded or Failed"
Jul  6 19:08:38.578: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-pc8t container test-container-subpath-preprovisionedpv-pc8t: <nil>
STEP: delete the pod
Jul  6 19:08:38.652: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pc8t to disappear
Jul  6 19:08:38.683: INFO: Pod pod-subpath-test-preprovisionedpv-pc8t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pc8t
Jul  6 19:08:38.683: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pc8t" in namespace "provisioning-1424"
STEP: Creating pod pod-subpath-test-preprovisionedpv-pc8t
STEP: Creating a pod to test subpath
Jul  6 19:08:38.748: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pc8t" in namespace "provisioning-1424" to be "Succeeded or Failed"
Jul  6 19:08:38.779: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 31.814647ms
Jul  6 19:08:40.810: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062739252s
Jul  6 19:08:42.842: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094092307s
Jul  6 19:08:44.873: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125040715s
STEP: Saw pod success
Jul  6 19:08:44.873: INFO: Pod "pod-subpath-test-preprovisionedpv-pc8t" satisfied condition "Succeeded or Failed"
Jul  6 19:08:44.904: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-pc8t container test-container-subpath-preprovisionedpv-pc8t: <nil>
STEP: delete the pod
Jul  6 19:08:44.973: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pc8t to disappear
Jul  6 19:08:45.004: INFO: Pod pod-subpath-test-preprovisionedpv-pc8t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pc8t
Jul  6 19:08:45.004: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pc8t" in namespace "provisioning-1424"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jul  6 19:08:41.121: INFO: PersistentVolumeClaim pvc-ghvlq found but phase is Pending instead of Bound.
Jul  6 19:08:43.152: INFO: PersistentVolumeClaim pvc-ghvlq found and phase=Bound (10.186826357s)
Jul  6 19:08:43.152: INFO: Waiting up to 3m0s for PersistentVolume local-ftz79 to have phase Bound
Jul  6 19:08:43.183: INFO: PersistentVolume local-ftz79 found and phase=Bound (30.837135ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-67mn
STEP: Creating a pod to test subpath
Jul  6 19:08:43.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-67mn" in namespace "provisioning-8139" to be "Succeeded or Failed"
Jul  6 19:08:43.320: INFO: Pod "pod-subpath-test-preprovisionedpv-67mn": Phase="Pending", Reason="", readiness=false. Elapsed: 30.533707ms
Jul  6 19:08:45.353: INFO: Pod "pod-subpath-test-preprovisionedpv-67mn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06260288s
Jul  6 19:08:47.385: INFO: Pod "pod-subpath-test-preprovisionedpv-67mn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095525738s
Jul  6 19:08:49.418: INFO: Pod "pod-subpath-test-preprovisionedpv-67mn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127609206s
Jul  6 19:08:51.449: INFO: Pod "pod-subpath-test-preprovisionedpv-67mn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158975165s
STEP: Saw pod success
Jul  6 19:08:51.449: INFO: Pod "pod-subpath-test-preprovisionedpv-67mn" satisfied condition "Succeeded or Failed"
Jul  6 19:08:51.480: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-67mn container test-container-subpath-preprovisionedpv-67mn: <nil>
STEP: delete the pod
Jul  6 19:08:51.548: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-67mn to disappear
Jul  6 19:08:51.579: INFO: Pod pod-subpath-test-preprovisionedpv-67mn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-67mn
Jul  6 19:08:51.579: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-67mn" in namespace "provisioning-8139"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jul  6 19:08:45.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  6 19:08:46.038: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:08:46.103: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1972" in namespace "provisioning-1972" to be "Succeeded or Failed"
Jul  6 19:08:46.133: INFO: Pod "hostpath-symlink-prep-provisioning-1972": Phase="Pending", Reason="", readiness=false. Elapsed: 30.050052ms
Jul  6 19:08:48.164: INFO: Pod "hostpath-symlink-prep-provisioning-1972": Phase="Running", Reason="", readiness=true. Elapsed: 2.060937944s
Jul  6 19:08:50.198: INFO: Pod "hostpath-symlink-prep-provisioning-1972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095198104s
STEP: Saw pod success
Jul  6 19:08:50.198: INFO: Pod "hostpath-symlink-prep-provisioning-1972" satisfied condition "Succeeded or Failed"
Jul  6 19:08:50.198: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1972" in namespace "provisioning-1972"
Jul  6 19:08:50.233: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1972" to be fully deleted
Jul  6 19:08:50.264: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cmfr
STEP: Creating a pod to test subpath
Jul  6 19:08:50.297: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cmfr" in namespace "provisioning-1972" to be "Succeeded or Failed"
Jul  6 19:08:50.328: INFO: Pod "pod-subpath-test-inlinevolume-cmfr": Phase="Pending", Reason="", readiness=false. Elapsed: 30.541517ms
Jul  6 19:08:52.359: INFO: Pod "pod-subpath-test-inlinevolume-cmfr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061578338s
Jul  6 19:08:54.390: INFO: Pod "pod-subpath-test-inlinevolume-cmfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092685298s
STEP: Saw pod success
Jul  6 19:08:54.390: INFO: Pod "pod-subpath-test-inlinevolume-cmfr" satisfied condition "Succeeded or Failed"
Jul  6 19:08:54.421: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-cmfr container test-container-volume-inlinevolume-cmfr: <nil>
STEP: delete the pod
Jul  6 19:08:54.487: INFO: Waiting for pod pod-subpath-test-inlinevolume-cmfr to disappear
Jul  6 19:08:54.518: INFO: Pod pod-subpath-test-inlinevolume-cmfr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cmfr
Jul  6 19:08:54.518: INFO: Deleting pod "pod-subpath-test-inlinevolume-cmfr" in namespace "provisioning-1972"
STEP: Deleting pod
Jul  6 19:08:54.548: INFO: Deleting pod "pod-subpath-test-inlinevolume-cmfr" in namespace "provisioning-1972"
Jul  6 19:08:54.610: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1972" in namespace "provisioning-1972" to be "Succeeded or Failed"
Jul  6 19:08:54.640: INFO: Pod "hostpath-symlink-prep-provisioning-1972": Phase="Pending", Reason="", readiness=false. Elapsed: 30.384427ms
Jul  6 19:08:56.671: INFO: Pod "hostpath-symlink-prep-provisioning-1972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061315967s
Jul  6 19:08:58.703: INFO: Pod "hostpath-symlink-prep-provisioning-1972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093069151s
STEP: Saw pod success
Jul  6 19:08:58.703: INFO: Pod "hostpath-symlink-prep-provisioning-1972" satisfied condition "Succeeded or Failed"
Jul  6 19:08:58.703: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1972" in namespace "provisioning-1972"
Jul  6 19:08:58.739: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1972" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:08:58.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1972" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:88.383 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:00.608: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 146 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":6,"skipped":38,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:01.259: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
• [SLOW TEST:57.664 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1050
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":2,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:09:01.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  6 19:09:01.500: INFO: Waiting up to 5m0s for pod "pod-2772cc5d-a166-47e0-a943-68d3ee20b199" in namespace "emptydir-8685" to be "Succeeded or Failed"
Jul  6 19:09:01.531: INFO: Pod "pod-2772cc5d-a166-47e0-a943-68d3ee20b199": Phase="Pending", Reason="", readiness=false. Elapsed: 31.581338ms
Jul  6 19:09:03.563: INFO: Pod "pod-2772cc5d-a166-47e0-a943-68d3ee20b199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063367869s
STEP: Saw pod success
Jul  6 19:09:03.563: INFO: Pod "pod-2772cc5d-a166-47e0-a943-68d3ee20b199" satisfied condition "Succeeded or Failed"
Jul  6 19:09:03.595: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-2772cc5d-a166-47e0-a943-68d3ee20b199 container test-container: <nil>
STEP: delete the pod
Jul  6 19:09:03.670: INFO: Waiting for pod pod-2772cc5d-a166-47e0-a943-68d3ee20b199 to disappear
Jul  6 19:09:03.702: INFO: Pod pod-2772cc5d-a166-47e0-a943-68d3ee20b199 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 24 lines ...
Jul  6 19:08:57.266: INFO: PersistentVolumeClaim pvc-t2ggv found but phase is Pending instead of Bound.
Jul  6 19:08:59.297: INFO: PersistentVolumeClaim pvc-t2ggv found and phase=Bound (4.094276325s)
Jul  6 19:08:59.297: INFO: Waiting up to 3m0s for PersistentVolume local-csv4x to have phase Bound
Jul  6 19:08:59.327: INFO: PersistentVolume local-csv4x found and phase=Bound (30.511756ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bb7l
STEP: Creating a pod to test subpath
Jul  6 19:08:59.420: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bb7l" in namespace "provisioning-6987" to be "Succeeded or Failed"
Jul  6 19:08:59.451: INFO: Pod "pod-subpath-test-preprovisionedpv-bb7l": Phase="Pending", Reason="", readiness=false. Elapsed: 30.703465ms
Jul  6 19:09:01.483: INFO: Pod "pod-subpath-test-preprovisionedpv-bb7l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063063032s
Jul  6 19:09:03.515: INFO: Pod "pod-subpath-test-preprovisionedpv-bb7l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095019979s
STEP: Saw pod success
Jul  6 19:09:03.515: INFO: Pod "pod-subpath-test-preprovisionedpv-bb7l" satisfied condition "Succeeded or Failed"
Jul  6 19:09:03.546: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-bb7l container test-container-volume-preprovisionedpv-bb7l: <nil>
STEP: delete the pod
Jul  6 19:09:03.616: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bb7l to disappear
Jul  6 19:09:03.647: INFO: Pod pod-subpath-test-preprovisionedpv-bb7l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bb7l
Jul  6 19:09:03.647: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bb7l" in namespace "provisioning-6987"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:04.189: INFO: Only supported for providers [gce gke] (not aws)
... skipping 43 lines ...
Jul  6 19:08:57.689: INFO: PersistentVolumeClaim pvc-fq4s4 found but phase is Pending instead of Bound.
Jul  6 19:08:59.721: INFO: PersistentVolumeClaim pvc-fq4s4 found and phase=Bound (16.287090084s)
Jul  6 19:08:59.721: INFO: Waiting up to 3m0s for PersistentVolume local-6xqpt to have phase Bound
Jul  6 19:08:59.752: INFO: PersistentVolume local-6xqpt found and phase=Bound (30.720765ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nht8
STEP: Creating a pod to test subpath
Jul  6 19:08:59.846: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nht8" in namespace "provisioning-3314" to be "Succeeded or Failed"
Jul  6 19:08:59.877: INFO: Pod "pod-subpath-test-preprovisionedpv-nht8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.842441ms
Jul  6 19:09:01.909: INFO: Pod "pod-subpath-test-preprovisionedpv-nht8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063390676s
Jul  6 19:09:03.942: INFO: Pod "pod-subpath-test-preprovisionedpv-nht8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096119701s
STEP: Saw pod success
Jul  6 19:09:03.942: INFO: Pod "pod-subpath-test-preprovisionedpv-nht8" satisfied condition "Succeeded or Failed"
Jul  6 19:09:03.976: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-nht8 container test-container-volume-preprovisionedpv-nht8: <nil>
STEP: delete the pod
Jul  6 19:09:04.065: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nht8 to disappear
Jul  6 19:09:04.101: INFO: Pod pod-subpath-test-preprovisionedpv-nht8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nht8
Jul  6 19:09:04.101: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nht8" in namespace "provisioning-3314"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":41,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:09:03.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:09:03.975: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9" in namespace "projected-5101" to be "Succeeded or Failed"
Jul  6 19:09:04.008: INFO: Pod "downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.367015ms
Jul  6 19:09:06.040: INFO: Pod "downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064580063s
STEP: Saw pod success
Jul  6 19:09:06.040: INFO: Pod "downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9" satisfied condition "Succeeded or Failed"
Jul  6 19:09:06.071: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9 container client-container: <nil>
STEP: delete the pod
Jul  6 19:09:06.141: INFO: Waiting for pod downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9 to disappear
Jul  6 19:09:06.172: INFO: Pod downwardapi-volume-df9b1ee6-6b99-4c8b-9c20-677ecb3e4ed9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:06.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5101" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:06.257: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  6 19:09:01.893: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  6 19:09:01.893: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8gds
STEP: Creating a pod to test subpath
Jul  6 19:09:01.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8gds" in namespace "provisioning-2920" to be "Succeeded or Failed"
Jul  6 19:09:01.959: INFO: Pod "pod-subpath-test-inlinevolume-8gds": Phase="Pending", Reason="", readiness=false. Elapsed: 31.393228ms
Jul  6 19:09:03.991: INFO: Pod "pod-subpath-test-inlinevolume-8gds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063452384s
Jul  6 19:09:06.024: INFO: Pod "pod-subpath-test-inlinevolume-8gds": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096276668s
STEP: Saw pod success
Jul  6 19:09:06.024: INFO: Pod "pod-subpath-test-inlinevolume-8gds" satisfied condition "Succeeded or Failed"
Jul  6 19:09:06.055: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-8gds container test-container-volume-inlinevolume-8gds: <nil>
STEP: delete the pod
Jul  6 19:09:06.126: INFO: Waiting for pod pod-subpath-test-inlinevolume-8gds to disappear
Jul  6 19:09:06.157: INFO: Pod pod-subpath-test-inlinevolume-8gds no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8gds
Jul  6 19:09:06.157: INFO: Deleting pod "pod-subpath-test-inlinevolume-8gds" in namespace "provisioning-2920"
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:06.311: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 39 lines ...
Jul  6 19:08:56.580: INFO: PersistentVolumeClaim pvc-lw2n6 found but phase is Pending instead of Bound.
Jul  6 19:08:58.612: INFO: PersistentVolumeClaim pvc-lw2n6 found and phase=Bound (8.162321862s)
Jul  6 19:08:58.612: INFO: Waiting up to 3m0s for PersistentVolume local-b9jz9 to have phase Bound
Jul  6 19:08:58.643: INFO: PersistentVolume local-b9jz9 found and phase=Bound (31.308622ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-64jp
STEP: Creating a pod to test subpath
Jul  6 19:08:58.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-64jp" in namespace "provisioning-4110" to be "Succeeded or Failed"
Jul  6 19:08:58.771: INFO: Pod "pod-subpath-test-preprovisionedpv-64jp": Phase="Pending", Reason="", readiness=false. Elapsed: 30.881401ms
Jul  6 19:09:00.803: INFO: Pod "pod-subpath-test-preprovisionedpv-64jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062933708s
Jul  6 19:09:02.836: INFO: Pod "pod-subpath-test-preprovisionedpv-64jp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095501768s
Jul  6 19:09:04.867: INFO: Pod "pod-subpath-test-preprovisionedpv-64jp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126630555s
Jul  6 19:09:06.899: INFO: Pod "pod-subpath-test-preprovisionedpv-64jp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158080097s
STEP: Saw pod success
Jul  6 19:09:06.899: INFO: Pod "pod-subpath-test-preprovisionedpv-64jp" satisfied condition "Succeeded or Failed"
Jul  6 19:09:06.929: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-64jp container test-container-volume-preprovisionedpv-64jp: <nil>
STEP: delete the pod
Jul  6 19:09:06.999: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-64jp to disappear
Jul  6 19:09:07.030: INFO: Pod pod-subpath-test-preprovisionedpv-64jp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-64jp
Jul  6 19:09:07.030: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-64jp" in namespace "provisioning-4110"
... skipping 100 lines ...
Jul  6 19:08:56.253: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:08:56.522: INFO: Exec stderr: ""
Jul  6 19:08:58.618: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-5370"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-5370"/host; echo host > "/var/lib/kubelet/mount-propagation-5370"/host/file] Namespace:mount-propagation-5370 PodName:hostexec-ip-172-20-61-17.ca-central-1.compute.internal-q92cr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul  6 19:08:58.618: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:08:58.948: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5370 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:08:58.948: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:08:59.258: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jul  6 19:08:59.289: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5370 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:08:59.289: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:08:59.598: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:08:59.629: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5370 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:08:59.629: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:08:59.931: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:08:59.962: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5370 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:08:59.962: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:00.253: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:00.287: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5370 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:00.287: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:00.558: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jul  6 19:09:00.590: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5370 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:00.590: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:00.862: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jul  6 19:09:00.894: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5370 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:00.894: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:01.191: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jul  6 19:09:01.222: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5370 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:01.222: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:01.489: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:01.521: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5370 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:01.521: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:01.794: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:01.826: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5370 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:01.826: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:02.109: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jul  6 19:09:02.141: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5370 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:02.141: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:02.407: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:02.439: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5370 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:02.439: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:02.711: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:02.742: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5370 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:02.742: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:03.029: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jul  6 19:09:03.061: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5370 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:03.061: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:03.321: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:03.353: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5370 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:03.353: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:03.629: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:03.660: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5370 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:03.660: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:03.997: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:04.038: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5370 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:04.038: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:04.316: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:04.347: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5370 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:04.347: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:04.602: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:04.633: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5370 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:04.633: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:04.908: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jul  6 19:09:04.940: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5370 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:09:04.940: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:05.194: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jul  6 19:09:05.194: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-5370 PodName:hostexec-ip-172-20-61-17.ca-central-1.compute.internal-q92cr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul  6 19:09:05.194: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:05.452: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 3927 -m cat "/var/lib/kubelet/mount-propagation-5370/host/file"] Namespace:mount-propagation-5370 PodName:hostexec-ip-172-20-61-17.ca-central-1.compute.internal-q92cr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul  6 19:09:05.452: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:09:05.720: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 3927 -m cat "/var/lib/kubelet/mount-propagation-5370/master/file"] Namespace:mount-propagation-5370 PodName:hostexec-ip-172-20-61-17.ca-central-1.compute.internal-q92cr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul  6 19:09:05.720: INFO: >>> kubeConfig: /root/.kube/config
... skipping 29 lines ...
• [SLOW TEST:28.980 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts within defined scopes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:83
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":3,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:08.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8834" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":9,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:08.766: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":39,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:09:07.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-80d4d628-67db-4190-9b63-2dfd6525e749
STEP: Creating a pod to test consume configMaps
Jul  6 19:09:07.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1" in namespace "configmap-6190" to be "Succeeded or Failed"
Jul  6 19:09:07.776: INFO: Pod "pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.728954ms
Jul  6 19:09:09.808: INFO: Pod "pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062969302s
STEP: Saw pod success
Jul  6 19:09:09.808: INFO: Pod "pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1" satisfied condition "Succeeded or Failed"
Jul  6 19:09:09.853: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:09:09.926: INFO: Waiting for pod pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1 to disappear
Jul  6 19:09:09.957: INFO: Pod pod-configmaps-8717e180-869f-440b-bbe9-b6d2249b94e1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:09.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6190" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:10.036: INFO: Only supported for providers [azure] (not aws)
... skipping 137 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-687ac38e-fd72-4f2c-8669-abaddca6386d
STEP: Creating a pod to test consume configMaps
Jul  6 19:09:06.528: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4" in namespace "projected-8638" to be "Succeeded or Failed"
Jul  6 19:09:06.559: INFO: Pod "pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.456923ms
Jul  6 19:09:08.592: INFO: Pod "pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064807107s
Jul  6 19:09:10.624: INFO: Pod "pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096624738s
STEP: Saw pod success
Jul  6 19:09:10.624: INFO: Pod "pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4" satisfied condition "Succeeded or Failed"
Jul  6 19:09:10.656: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:09:10.736: INFO: Waiting for pod pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4 to disappear
Jul  6 19:09:10.767: INFO: Pod pod-projected-configmaps-e6a17771-b18e-4bd3-b627-ebc7aaee10e4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:10.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8638" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:10.855: INFO: Only supported for providers [vsphere] (not aws)
... skipping 70 lines ...
• [SLOW TEST:72.000 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:11.654: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 209 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:12.190: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:12.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2453" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:15.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6637" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Jul  6 19:09:14.421: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1806 explain e2e-test-crd-publish-openapi-1873-crds.spec'
Jul  6 19:09:14.682: INFO: stderr: ""
Jul  6 19:09:14.682: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-1873-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul  6 19:09:14.682: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1806 explain e2e-test-crd-publish-openapi-1873-crds.spec.bars'
Jul  6 19:09:14.930: INFO: stderr: ""
Jul  6 19:09:14.931: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-1873-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul  6 19:09:14.931: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1806 explain e2e-test-crd-publish-openapi-1873-crds.spec.bars2'
Jul  6 19:09:15.180: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:17.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1806" for this suite.
... skipping 2 lines ...
• [SLOW TEST:9.267 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:17.923: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 116 lines ...
Jul  6 19:08:15.995: INFO: PersistentVolumeClaim pvc-t6t4t found and phase=Bound (2.072125306s)
STEP: Deleting the previously created pod
Jul  6 19:08:31.159: INFO: Deleting pod "pvc-volume-tester-vfwdj" in namespace "csi-mock-volumes-1607"
Jul  6 19:08:31.194: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vfwdj" to be fully deleted
STEP: Checking CSI driver logs
Jul  6 19:08:41.296: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IlVueDdsWkpRTWh4OEViR1FpSHZWbHFuMndwazFSQlhyR3FGQkk0THo1d28ifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MjU1OTkxMDQsImlhdCI6MTYyNTU5ODUwNCwiaXNzIjoiaHR0cHM6Ly9rOHMta29wcy1wcm93LnMzLnVzLXdlc3QtMS5hbWF6b25hd3MuY29tL2tvcHMtZ3JpZC1zY2VuYXJpby1hd3MtY2xvdWQtY29udHJvbGxlci1tYW5hZ2VyLWlyc2EvZGlzY292ZXJ5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTE2MDciLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLXZmd2RqIiwidWlkIjoiODNkYTAxYTAtNmJhNC00MDVmLWE4NGQtYjJlZjA4NjFlMTUwIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiYmJkZTYwNmMtZDhjMi00MjQyLWI1MTgtMzlkMzhkODJjMzlhIn19LCJuYmYiOjE2MjU1OTg1MDQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTE2MDc6ZGVmYXVsdCJ9.j5lbcc5KMfIsHQKcD_7n0ekbP-hPi0HDHcBsqPKhgXmKxYZKVJpq1W7rPen03dtaWZmdpAmcPVY1wJYqeSZOk5oZGcDg2lEtZulzxCJHYFEiFOo9QmSbuysdgP3zvPtqhd4zpmdHu5-u_rWqBogQKXOw7kZEFgyjdFfGLzRssDWnLw4HTq3zQHFW1fyjcGWv2xG2anQu4AHERqFZt4igdwOdzoAueaiZCiZ4jTnjYX2iO6ZkmMR0BRHSKFjWEIhGK_qXvAh8gAHCHklmoB_OLLlWpdojc4VMFZDTbjhZ05kqfSBF3n9Yya1V7X__Iq2PFYNkcWKmHR7GWZntfgknOw","expirationTimestamp":"2021-07-06T19:18:24Z"}}
Jul  6 19:08:41.296: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/83da01a0-6ba4-405f-a84d-b2ef0861e150/volumes/kubernetes.io~csi/pvc-d306ae83-02b7-4f83-92c7-d8ae3cb1dd0c/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-vfwdj
Jul  6 19:08:41.296: INFO: Deleting pod "pvc-volume-tester-vfwdj" in namespace "csi-mock-volumes-1607"
STEP: Deleting claim pvc-t6t4t
Jul  6 19:08:41.391: INFO: Waiting up to 2m0s for PersistentVolume pvc-d306ae83-02b7-4f83-92c7-d8ae3cb1dd0c to get deleted
Jul  6 19:08:41.424: INFO: PersistentVolume pvc-d306ae83-02b7-4f83-92c7-d8ae3cb1dd0c found and phase=Released (32.690628ms)
Jul  6 19:08:43.456: INFO: PersistentVolume pvc-d306ae83-02b7-4f83-92c7-d8ae3cb1dd0c was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:18.760: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:22.281: INFO: Only supported for providers [azure] (not aws)
... skipping 94 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:24.209: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
• [SLOW TEST:60.285 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:24.409: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:16.729 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":10,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:25.548: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:25.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4590" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:25.825: INFO: Only supported for providers [azure] (not aws)
... skipping 85 lines ...
STEP: Registering slow webhook via the AdmissionRegistration API
Jul  6 19:08:40.413: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:08:50.581: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:00.679: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:10.778: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:20.843: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:20.844: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001975b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 473 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:09:20.844: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001975b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188
------------------------------
SSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":4,"skipped":56,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:25.883: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 115 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul  6 19:09:24.415: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:09:24.446: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7796
STEP: Creating a pod to test subpath
Jul  6 19:09:24.480: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7796" in namespace "provisioning-7607" to be "Succeeded or Failed"
Jul  6 19:09:24.513: INFO: Pod "pod-subpath-test-inlinevolume-7796": Phase="Pending", Reason="", readiness=false. Elapsed: 32.856586ms
Jul  6 19:09:26.544: INFO: Pod "pod-subpath-test-inlinevolume-7796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064459892s
Jul  6 19:09:28.575: INFO: Pod "pod-subpath-test-inlinevolume-7796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095515169s
STEP: Saw pod success
Jul  6 19:09:28.575: INFO: Pod "pod-subpath-test-inlinevolume-7796" satisfied condition "Succeeded or Failed"
Jul  6 19:09:28.607: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-7796 container test-container-subpath-inlinevolume-7796: <nil>
STEP: delete the pod
Jul  6 19:09:28.682: INFO: Waiting for pod pod-subpath-test-inlinevolume-7796 to disappear
Jul  6 19:09:28.714: INFO: Pod pod-subpath-test-inlinevolume-7796 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7796
Jul  6 19:09:28.715: INFO: Deleting pod "pod-subpath-test-inlinevolume-7796" in namespace "provisioning-7607"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:28.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:28.859: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
Jul  6 19:08:59.005: INFO: Creating resource for dynamic PV
Jul  6 19:08:59.005: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-2158jbl54
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jul  6 19:08:59.100: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  6 19:08:59.166: INFO: Error updating pvc awskkdc4: PersistentVolumeClaim "awskkdc4" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2158jbl54",
  	... // 2 identical fields
  }

... skipping 210 lines (15 repeated retries of the same "Error updating pvc awskkdc4" immutable-spec error, 19:09:01 through 19:09:29) ...
Jul  6 19:09:29.300: INFO: Error updating pvc awskkdc4: PersistentVolumeClaim "awskkdc4" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":9,"skipped":38,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:16.696 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":11,"skipped":72,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:29.804: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:31.183: INFO: Only supported for providers [openstack] (not aws)
... skipping 44 lines ...
Jul  6 19:09:26.291: INFO: PersistentVolumeClaim pvc-hpt6f found but phase is Pending instead of Bound.
Jul  6 19:09:28.323: INFO: PersistentVolumeClaim pvc-hpt6f found and phase=Bound (4.095664534s)
Jul  6 19:09:28.323: INFO: Waiting up to 3m0s for PersistentVolume local-z74fl to have phase Bound
Jul  6 19:09:28.354: INFO: PersistentVolume local-z74fl found and phase=Bound (31.376257ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-nr4p
STEP: Creating a pod to test exec-volume-test
Jul  6 19:09:28.450: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-nr4p" in namespace "volume-7840" to be "Succeeded or Failed"
Jul  6 19:09:28.481: INFO: Pod "exec-volume-test-preprovisionedpv-nr4p": Phase="Pending", Reason="", readiness=false. Elapsed: 31.394271ms
Jul  6 19:09:30.514: INFO: Pod "exec-volume-test-preprovisionedpv-nr4p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064161279s
STEP: Saw pod success
Jul  6 19:09:30.514: INFO: Pod "exec-volume-test-preprovisionedpv-nr4p" satisfied condition "Succeeded or Failed"
Jul  6 19:09:30.547: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-nr4p container exec-container-preprovisionedpv-nr4p: <nil>
STEP: delete the pod
Jul  6 19:09:30.625: INFO: Waiting for pod exec-volume-test-preprovisionedpv-nr4p to disappear
Jul  6 19:09:30.656: INFO: Pod exec-volume-test-preprovisionedpv-nr4p no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-nr4p
Jul  6 19:09:30.656: INFO: Deleting pod "exec-volume-test-preprovisionedpv-nr4p" in namespace "volume-7840"
... skipping 33 lines ...
Jul  6 19:09:25.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Jul  6 19:09:25.819: INFO: Waiting up to 5m0s for pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370" in namespace "var-expansion-7530" to be "Succeeded or Failed"
Jul  6 19:09:25.850: INFO: Pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370": Phase="Pending", Reason="", readiness=false. Elapsed: 31.605403ms
Jul  6 19:09:27.885: INFO: Pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066128224s
Jul  6 19:09:29.918: INFO: Pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099012316s
Jul  6 19:09:31.950: INFO: Pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131027635s
Jul  6 19:09:33.981: INFO: Pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162647053s
STEP: Saw pod success
Jul  6 19:09:33.981: INFO: Pod "var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370" satisfied condition "Succeeded or Failed"
Jul  6 19:09:34.013: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370 container dapi-container: <nil>
STEP: delete the pod
Jul  6 19:09:34.081: INFO: Waiting for pod var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370 to disappear
Jul  6 19:09:34.112: INFO: Pod var-expansion-60b390f2-996b-4f31-9a76-e76a77da2370 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.561 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":80,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:34.224: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 116 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-c33a3f7a-1e49-43f2-bf58-84a47b5eff26
STEP: Creating a pod to test consume configMaps
Jul  6 19:09:26.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5" in namespace "configmap-1331" to be "Succeeded or Failed"
Jul  6 19:09:26.167: INFO: Pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.670452ms
Jul  6 19:09:28.198: INFO: Pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062095558s
Jul  6 19:09:30.241: INFO: Pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105121336s
Jul  6 19:09:32.272: INFO: Pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13640697s
Jul  6 19:09:34.303: INFO: Pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167290479s
STEP: Saw pod success
Jul  6 19:09:34.303: INFO: Pod "pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5" satisfied condition "Succeeded or Failed"
Jul  6 19:09:34.334: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:09:34.405: INFO: Waiting for pod pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5 to disappear
Jul  6 19:09:34.437: INFO: Pod pod-configmaps-66b299ed-229c-4048-a528-44ff1d59f4e5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.592 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:34.520: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:34.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1375" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":6,"skipped":66,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:34.877: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:35.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-4747" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":7,"skipped":79,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:35.190: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:09:32.202: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
Jul  6 19:09:32.360: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  6 19:09:32.360: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jkf6
STEP: Creating a pod to test subpath
Jul  6 19:09:32.398: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jkf6" in namespace "provisioning-63" to be "Succeeded or Failed"
Jul  6 19:09:32.433: INFO: Pod "pod-subpath-test-inlinevolume-jkf6": Phase="Pending", Reason="", readiness=false. Elapsed: 35.166235ms
Jul  6 19:09:34.465: INFO: Pod "pod-subpath-test-inlinevolume-jkf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067077994s
Jul  6 19:09:36.497: INFO: Pod "pod-subpath-test-inlinevolume-jkf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099049485s
STEP: Saw pod success
Jul  6 19:09:36.497: INFO: Pod "pod-subpath-test-inlinevolume-jkf6" satisfied condition "Succeeded or Failed"
Jul  6 19:09:36.529: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-jkf6 container test-container-subpath-inlinevolume-jkf6: <nil>
STEP: delete the pod
Jul  6 19:09:36.613: INFO: Waiting for pod pod-subpath-test-inlinevolume-jkf6 to disappear
Jul  6 19:09:36.644: INFO: Pod pod-subpath-test-inlinevolume-jkf6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jkf6
Jul  6 19:09:36.644: INFO: Deleting pod "pod-subpath-test-inlinevolume-jkf6" in namespace "provisioning-63"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:36.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-63" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":34,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:75.741 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:36.834: INFO: Only supported for providers [gce gke] (not aws)
... skipping 114 lines ...
• [SLOW TEST:10.972 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":5,"skipped":70,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.420 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:37.625: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 99 lines ...
Jul  6 19:08:44.001: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathd4jjb] to have phase Bound
Jul  6 19:08:44.031: INFO: PersistentVolumeClaim csi-hostpathd4jjb found but phase is Pending instead of Bound.
Jul  6 19:08:46.063: INFO: PersistentVolumeClaim csi-hostpathd4jjb found but phase is Pending instead of Bound.
Jul  6 19:08:48.094: INFO: PersistentVolumeClaim csi-hostpathd4jjb found and phase=Bound (4.093125916s)
STEP: Expanding non-expandable pvc
Jul  6 19:08:48.155: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  6 19:08:48.217: INFO: Error updating pvc csi-hostpathd4jjb: persistentvolumeclaims "csi-hostpathd4jjb" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 15 lines (same "Error updating pvc csi-hostpathd4jjb: ... forbidden" message repeated every ~2s from 19:08:50.280 through 19:09:18.280) ...
Jul  6 19:09:18.343: INFO: Error updating pvc csi-hostpathd4jjb: persistentvolumeclaims "csi-hostpathd4jjb" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  6 19:09:18.343: INFO: Deleting PersistentVolumeClaim "csi-hostpathd4jjb"
Jul  6 19:09:18.393: INFO: Waiting up to 5m0s for PersistentVolume pvc-ab091cb2-1ba7-438c-aeca-d44bbec07b71 to get deleted
Jul  6 19:09:18.423: INFO: PersistentVolume pvc-ab091cb2-1ba7-438c-aeca-d44bbec07b71 found and phase=Released (30.199333ms)
Jul  6 19:09:23.455: INFO: PersistentVolume pvc-ab091cb2-1ba7-438c-aeca-d44bbec07b71 was removed
STEP: Deleting sc
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":47,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 111 lines ...
Jul  6 19:09:35.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul  6 19:09:35.407: INFO: Waiting up to 5m0s for pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288" in namespace "security-context-5442" to be "Succeeded or Failed"
Jul  6 19:09:35.438: INFO: Pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288": Phase="Pending", Reason="", readiness=false. Elapsed: 30.726208ms
Jul  6 19:09:37.472: INFO: Pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064802008s
Jul  6 19:09:39.504: INFO: Pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096653156s
Jul  6 19:09:41.536: INFO: Pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128797403s
Jul  6 19:09:43.568: INFO: Pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.16062762s
STEP: Saw pod success
Jul  6 19:09:43.568: INFO: Pod "security-context-348700ef-5bda-4876-a2a2-e625db9cc288" satisfied condition "Succeeded or Failed"
Jul  6 19:09:43.607: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod security-context-348700ef-5bda-4876-a2a2-e625db9cc288 container test-container: <nil>
STEP: delete the pod
Jul  6 19:09:43.673: INFO: Waiting for pod security-context-348700ef-5bda-4876-a2a2-e625db9cc288 to disappear
Jul  6 19:09:43.704: INFO: Pod security-context-348700ef-5bda-4876-a2a2-e625db9cc288 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 33 lines ...
• [SLOW TEST:6.813 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":8,"skipped":87,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":71,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:43.792: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:44.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9385" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":9,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4949" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":5,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  6 19:09:37.854: INFO: Waiting up to 5m0s for pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08" in namespace "emptydir-4934" to be "Succeeded or Failed"
Jul  6 19:09:37.885: INFO: Pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08": Phase="Pending", Reason="", readiness=false. Elapsed: 30.711941ms
Jul  6 19:09:39.916: INFO: Pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061863482s
Jul  6 19:09:41.949: INFO: Pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094722045s
Jul  6 19:09:43.980: INFO: Pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126605178s
Jul  6 19:09:46.011: INFO: Pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157444118s
STEP: Saw pod success
Jul  6 19:09:46.011: INFO: Pod "pod-75cb06be-74f1-4dce-85a7-60c639082c08" satisfied condition "Succeeded or Failed"
Jul  6 19:09:46.042: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-75cb06be-74f1-4dce-85a7-60c639082c08 container test-container: <nil>
STEP: delete the pod
Jul  6 19:09:46.109: INFO: Waiting for pod pod-75cb06be-74f1-4dce-85a7-60c639082c08 to disappear
Jul  6 19:09:46.139: INFO: Pod pod-75cb06be-74f1-4dce-85a7-60c639082c08 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":7,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:46.213: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 138 lines ...
Jul  6 19:09:41.392: INFO: PersistentVolumeClaim pvc-n7qjr found but phase is Pending instead of Bound.
Jul  6 19:09:43.425: INFO: PersistentVolumeClaim pvc-n7qjr found and phase=Bound (6.128621713s)
Jul  6 19:09:43.425: INFO: Waiting up to 3m0s for PersistentVolume local-278bf to have phase Bound
Jul  6 19:09:43.459: INFO: PersistentVolume local-278bf found and phase=Bound (33.90526ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-jdjn
STEP: Creating a pod to test exec-volume-test
Jul  6 19:09:43.553: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-jdjn" in namespace "volume-8757" to be "Succeeded or Failed"
Jul  6 19:09:43.607: INFO: Pod "exec-volume-test-preprovisionedpv-jdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 53.383979ms
Jul  6 19:09:45.639: INFO: Pod "exec-volume-test-preprovisionedpv-jdjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085377612s
STEP: Saw pod success
Jul  6 19:09:45.639: INFO: Pod "exec-volume-test-preprovisionedpv-jdjn" satisfied condition "Succeeded or Failed"
Jul  6 19:09:45.670: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-jdjn container exec-container-preprovisionedpv-jdjn: <nil>
STEP: delete the pod
Jul  6 19:09:45.739: INFO: Waiting for pod exec-volume-test-preprovisionedpv-jdjn to disappear
Jul  6 19:09:45.770: INFO: Pod exec-volume-test-preprovisionedpv-jdjn no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-jdjn
Jul  6 19:09:45.770: INFO: Deleting pod "exec-volume-test-preprovisionedpv-jdjn" in namespace "volume-8757"
... skipping 24 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":109,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:46.775: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 166 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-b911e30a-64a8-45e8-9aa4-c423139d9a9a
STEP: Creating a pod to test consume configMaps
Jul  6 19:09:47.051: INFO: Waiting up to 5m0s for pod "pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470" in namespace "configmap-8082" to be "Succeeded or Failed"
Jul  6 19:09:47.082: INFO: Pod "pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470": Phase="Pending", Reason="", readiness=false. Elapsed: 31.240887ms
Jul  6 19:09:49.114: INFO: Pod "pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063228164s
STEP: Saw pod success
Jul  6 19:09:49.115: INFO: Pod "pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470" satisfied condition "Succeeded or Failed"
Jul  6 19:09:49.156: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:09:49.257: INFO: Waiting for pod pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470 to disappear
Jul  6 19:09:49.293: INFO: Pod pod-configmaps-321f3812-6d59-45f6-ac98-9c1af8c39470 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:49.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8082" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:09:43.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  6 19:09:43.993: INFO: Waiting up to 5m0s for pod "downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252" in namespace "downward-api-6806" to be "Succeeded or Failed"
Jul  6 19:09:44.024: INFO: Pod "downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252": Phase="Pending", Reason="", readiness=false. Elapsed: 31.008088ms
Jul  6 19:09:46.056: INFO: Pod "downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062682059s
Jul  6 19:09:48.088: INFO: Pod "downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094511152s
Jul  6 19:09:50.120: INFO: Pod "downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126972024s
STEP: Saw pod success
Jul  6 19:09:50.120: INFO: Pod "downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252" satisfied condition "Succeeded or Failed"
Jul  6 19:09:50.152: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252 container dapi-container: <nil>
STEP: delete the pod
Jul  6 19:09:50.230: INFO: Waiting for pod downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252 to disappear
Jul  6 19:09:50.261: INFO: Pod downward-api-4286e02c-1ce6-4ce8-9141-4b9ade6f4252 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.525 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":75,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:50.341: INFO: Only supported for providers [openstack] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:09:49.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069" in namespace "projected-9361" to be "Succeeded or Failed"
Jul  6 19:09:49.596: INFO: Pod "downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069": Phase="Pending", Reason="", readiness=false. Elapsed: 30.48397ms
Jul  6 19:09:51.628: INFO: Pod "downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062118254s
STEP: Saw pod success
Jul  6 19:09:51.628: INFO: Pod "downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069" satisfied condition "Succeeded or Failed"
Jul  6 19:09:51.659: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069 container client-container: <nil>
STEP: delete the pod
Jul  6 19:09:51.736: INFO: Waiting for pod downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069 to disappear
Jul  6 19:09:51.772: INFO: Pod downwardapi-volume-b6642a73-7986-4c78-abd7-21f59eb75069 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:09:51.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9361" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
I0706 19:07:30.358494   12583 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-4671, replica count: 2
I0706 19:07:33.410564   12583 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0706 19:07:36.410897   12583 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  6 19:07:36.410: INFO: Creating new exec pod
Jul  6 19:07:41.568: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4671 exec execpodrcfvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Jul  6 19:07:47.024: INFO: rc: 1
Jul  6 19:07:47.024: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4671 exec execpodrcfvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-update-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 280 lines (same "Running ... nc -v -t -w 2 nodeport-update-service 80" probe retried, rc: 1, "nc: getaddrinfo: Try again" each time) ...
Jul  6 19:09:47.499: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4671 exec execpodrcfvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Jul  6 19:09:52.994: INFO: rc: 1
Jul  6 19:09:52.994: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4671 exec execpodrcfvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-update-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:52.995: FAIL: Unexpected error:
    <*errors.errorString | 0xc003aae5a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
occurred

... skipping 275 lines ...
• Failure [144.794 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1229

  Jul  6 19:09:52.995: Unexpected error:
      <*errors.errorString | 0xc003aae5a0>: {
          s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1263
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":1,"skipped":3,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
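The failure above comes from a probe retried roughly every 6s until a 2m0s deadline; every attempt dies at `nc: getaddrinfo: Try again`, i.e. the DNS lookup of the service name failed inside the exec pod before any TCP connection was attempted. A minimal runnable sketch of that retry-until-timeout pattern (hypothetical helper names — the real loop lives in the k8s e2e framework, and the probe is stubbed so the sketch runs without a cluster):

```shell
#!/bin/sh
probe() {
  # Stub for the real probe the log shows:
  #   kubectl exec execpodrcfvn -- /bin/sh -x -c \
  #     'echo hostName | nc -v -t -w 2 nodeport-update-service 80'
  # Always fails here, mimicking the persistent getaddrinfo failure.
  return 1
}

retry_until() {
  # retry_until <timeout_seconds> <interval_seconds>
  deadline=$(( $(date +%s) + $1 ))
  while ! probe; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "service is not reachable within ${1}s timeout"
      return 1
    fi
    sleep "$2"
  done
  return 0
}

retry_until 2 1 || true
```

Because the probe never resolves the name, the loop exhausts the deadline and surfaces the same "service is not reachable within ... timeout" error recorded in the log.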

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:54.880: INFO: Only supported for providers [gce gke] (not aws)
... skipping 112 lines ...
• [SLOW TEST:13.589 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:09:59.088: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
Jul  6 19:09:57.028: INFO: PersistentVolumeClaim pvc-fkz6n found but phase is Pending instead of Bound.
Jul  6 19:09:59.060: INFO: PersistentVolumeClaim pvc-fkz6n found and phase=Bound (12.221247626s)
Jul  6 19:09:59.060: INFO: Waiting up to 3m0s for PersistentVolume local-lvbhk to have phase Bound
Jul  6 19:09:59.092: INFO: PersistentVolume local-lvbhk found and phase=Bound (31.749897ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-892j
STEP: Creating a pod to test subpath
Jul  6 19:09:59.186: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-892j" in namespace "provisioning-8426" to be "Succeeded or Failed"
Jul  6 19:09:59.217: INFO: Pod "pod-subpath-test-preprovisionedpv-892j": Phase="Pending", Reason="", readiness=false. Elapsed: 30.83911ms
Jul  6 19:10:01.248: INFO: Pod "pod-subpath-test-preprovisionedpv-892j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062396251s
Jul  6 19:10:03.280: INFO: Pod "pod-subpath-test-preprovisionedpv-892j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094194168s
STEP: Saw pod success
Jul  6 19:10:03.280: INFO: Pod "pod-subpath-test-preprovisionedpv-892j" satisfied condition "Succeeded or Failed"
Jul  6 19:10:03.311: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-892j container test-container-subpath-preprovisionedpv-892j: <nil>
STEP: delete the pod
Jul  6 19:10:03.386: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-892j to disappear
Jul  6 19:10:03.417: INFO: Pod pod-subpath-test-preprovisionedpv-892j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-892j
Jul  6 19:10:03.417: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-892j" in namespace "provisioning-8426"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":93,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Jul  6 19:09:20.984: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:31.147: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:41.259: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:09:51.349: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:10:01.414: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:10:01.414: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002c8240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "webhook-2556".
STEP: Found 8 events.
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:05 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-78988fc6cd to 1
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:05 +0000 UTC - event for sample-webhook-deployment-78988fc6cd: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-78988fc6cd-5xjcb
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:05 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5xjcb: {default-scheduler } Scheduled: Successfully assigned webhook-2556/sample-webhook-deployment-78988fc6cd-5xjcb to ip-172-20-61-241.ca-central-1.compute.internal
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:06 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5xjcb: {kubelet ip-172-20-61-241.ca-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-fp4pm" : failed to sync configmap cache: timed out waiting for the condition
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:06 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5xjcb: {kubelet ip-172-20-61-241.ca-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:07 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5xjcb: {kubelet ip-172-20-61-241.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:07 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5xjcb: {kubelet ip-172-20-61-241.ca-central-1.compute.internal} Created: Created container sample-webhook
Jul  6 19:10:01.446: INFO: At 2021-07-06 19:09:07 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5xjcb: {kubelet ip-172-20-61-241.ca-central-1.compute.internal} Started: Started container sample-webhook
Jul  6 19:10:01.477: INFO: POD                                         NODE                                            PHASE    GRACE  CONDITIONS
Jul  6 19:10:01.477: INFO: sample-webhook-deployment-78988fc6cd-5xjcb  ip-172-20-61-241.ca-central-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:09:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:09:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:09:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:09:05 +0000 UTC  }]
Jul  6 19:10:01.477: INFO: 
... skipping 454 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:10:01.414: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002c8240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 471 lines ...
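The webhook failure earlier follows the same shape: "Waiting for webhook configuration to be ready..." is a bounded poll (here roughly every 10s for ~60s) that gave up with "timed out waiting for the condition" even though the sample-webhook pod itself was Running. A hedged sketch of that bounded-poll pattern (hypothetical check function — the real e2e test creates a marker object and verifies the registered admission webhook intercepts it):

```shell
#!/bin/sh
webhook_intercepts_marker() {
  # Stub standing in for: create a dummy object and see whether the
  # registered admission webhook rejects or mutates it. Always false
  # here, mimicking a webhook whose configuration never became active.
  return 1
}

wait_for_webhook() {
  # wait_for_webhook <attempts>
  attempts=$1
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if webhook_intercepts_marker; then
      echo "webhook ready"
      return 0
    fi
    echo "Waiting for webhook configuration to be ready..."
    i=$((i + 1))
  done
  echo "timed out waiting for the condition"
  return 1
}

wait_for_webhook 3 || true
```

Note the distinction this failure mode illustrates: the webhook Deployment reaching Running is necessary but not sufficient — the configuration must also be observed and served by the apiserver, which is what the poll (and its eventual timeout) measures.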
• [SLOW TEST:13.136 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":2,"skipped":13,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:09:43.152: INFO: >>> kubeConfig: /root/.kube/config
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:15.462: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":10,"skipped":95,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:05.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:10.662 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":11,"skipped":95,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:16.021: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 74 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 212 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":13,"skipped":127,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:17.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
Jul  6 19:10:17.872: INFO: AfterEach: Cleaning up test resources.
Jul  6 19:10:17.872: INFO: Deleting PersistentVolumeClaim "pvc-9wdw2"
Jul  6 19:10:17.902: INFO: Deleting PersistentVolume "hostpath-kjf5d"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:17.943: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":5,"skipped":63,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:04.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:16.754 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":6,"skipped":63,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:21.764: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Jul  6 19:10:17.540: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6384" to be "Succeeded or Failed"
Jul  6 19:10:17.572: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 32.135886ms
Jul  6 19:10:19.604: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063994466s
Jul  6 19:10:21.637: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096695717s
STEP: Saw pod success
Jul  6 19:10:21.637: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  6 19:10:21.668: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul  6 19:10:21.741: INFO: Waiting for pod pod-host-path-test to disappear
Jul  6 19:10:21.772: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:21.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6384" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":14,"skipped":131,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:21.847: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
Jul  6 19:09:30.295: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jul  6 19:09:30.738: INFO: Successfully created a new PD: "aws://ca-central-1a/vol-0702a93985b175b74".
Jul  6 19:09:30.738: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-fq6t
STEP: Creating a pod to test exec-volume-test
Jul  6 19:09:30.781: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-fq6t" in namespace "volume-5537" to be "Succeeded or Failed"
Jul  6 19:09:30.813: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 31.45044ms
Jul  6 19:09:32.844: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062598513s
Jul  6 19:09:34.879: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097454526s
Jul  6 19:09:36.910: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129130618s
Jul  6 19:09:38.943: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161647771s
Jul  6 19:09:40.974: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192999916s
... skipping 8 lines ...
Jul  6 19:09:59.259: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 28.477760749s
Jul  6 19:10:01.295: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 30.513605834s
Jul  6 19:10:03.326: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 32.544527735s
Jul  6 19:10:05.359: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Pending", Reason="", readiness=false. Elapsed: 34.577441656s
Jul  6 19:10:07.392: INFO: Pod "exec-volume-test-inlinevolume-fq6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.610600624s
STEP: Saw pod success
Jul  6 19:10:07.392: INFO: Pod "exec-volume-test-inlinevolume-fq6t" satisfied condition "Succeeded or Failed"
Jul  6 19:10:07.423: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod exec-volume-test-inlinevolume-fq6t container exec-container-inlinevolume-fq6t: <nil>
STEP: delete the pod
Jul  6 19:10:07.495: INFO: Waiting for pod exec-volume-test-inlinevolume-fq6t to disappear
Jul  6 19:10:07.526: INFO: Pod exec-volume-test-inlinevolume-fq6t no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-fq6t
Jul  6 19:10:07.526: INFO: Deleting pod "exec-volume-test-inlinevolume-fq6t" in namespace "volume-5537"
Jul  6 19:10:07.710: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0702a93985b175b74", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0702a93985b175b74 is currently attached to i-005381dcd763db5a9
	status code: 400, request id: b88612ad-e792-4722-ab6f-bf86132196c6
Jul  6 19:10:12.979: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0702a93985b175b74", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0702a93985b175b74 is currently attached to i-005381dcd763db5a9
	status code: 400, request id: db7451fe-764a-448f-bd7f-6529a7556e18
Jul  6 19:10:18.241: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0702a93985b175b74", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0702a93985b175b74 is currently attached to i-005381dcd763db5a9
	status code: 400, request id: 81cab1f1-ecc7-472a-8736-fe94c4135d50
Jul  6 19:10:23.510: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0702a93985b175b74".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:23.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5537" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":102,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:23.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":13,"skipped":109,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:23.744: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 253 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:503
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:26.363: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":8,"skipped":67,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:27.034: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
Jul  6 19:10:23.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Jul  6 19:10:23.965: INFO: Waiting up to 5m0s for pod "client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f" in namespace "containers-3521" to be "Succeeded or Failed"
Jul  6 19:10:23.996: INFO: Pod "client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.303326ms
Jul  6 19:10:26.027: INFO: Pod "client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f": Phase="Running", Reason="", readiness=true. Elapsed: 2.062565645s
Jul  6 19:10:28.059: INFO: Pod "client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094626914s
STEP: Saw pod success
Jul  6 19:10:28.059: INFO: Pod "client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f" satisfied condition "Succeeded or Failed"
Jul  6 19:10:28.091: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:10:28.166: INFO: Waiting for pod client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f to disappear
Jul  6 19:10:28.199: INFO: Pod client-containers-9c66e624-7b36-4ae4-b9e8-bf3b65192f2f no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 10 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jul  6 19:10:27.277: INFO: Waiting up to 5m0s for pod "pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f" in namespace "emptydir-524" to be "Succeeded or Failed"
Jul  6 19:10:27.308: INFO: Pod "pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.921323ms
Jul  6 19:10:29.339: INFO: Pod "pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062364913s
Jul  6 19:10:31.371: INFO: Pod "pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0943796s
STEP: Saw pod success
Jul  6 19:10:31.371: INFO: Pod "pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f" satisfied condition "Succeeded or Failed"
Jul  6 19:10:31.405: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f container test-container: <nil>
STEP: delete the pod
Jul  6 19:10:31.483: INFO: Waiting for pod pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f to disappear
Jul  6 19:10:31.514: INFO: Pod pod-e6a73176-4782-484c-8e6c-d90bd2a2ac2f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:31.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-524" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":78,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:26.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-9893/configmap-test-736537de-b71d-48c8-bf72-d0febdcf15c8
STEP: Creating a pod to test consume configMaps
Jul  6 19:10:26.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94" in namespace "configmap-9893" to be "Succeeded or Failed"
Jul  6 19:10:26.529: INFO: Pod "pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94": Phase="Pending", Reason="", readiness=false. Elapsed: 30.481655ms
Jul  6 19:10:28.561: INFO: Pod "pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06176962s
Jul  6 19:10:30.593: INFO: Pod "pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093797531s
Jul  6 19:10:32.624: INFO: Pod "pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124775761s
STEP: Saw pod success
Jul  6 19:10:32.624: INFO: Pod "pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94" satisfied condition "Succeeded or Failed"
Jul  6 19:10:32.655: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94 container env-test: <nil>
STEP: delete the pod
Jul  6 19:10:32.726: INFO: Waiting for pod pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94 to disappear
Jul  6 19:10:32.757: INFO: Pod pod-configmaps-d73172b3-ed2d-4ce6-916d-e37a4590ce94 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.548 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":73,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":116,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:28.281: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  6 19:10:28.443: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:10:28.474: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-nvt4
STEP: Creating a pod to test subpath
Jul  6 19:10:28.513: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-nvt4" in namespace "provisioning-2237" to be "Succeeded or Failed"
Jul  6 19:10:28.544: INFO: Pod "pod-subpath-test-inlinevolume-nvt4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.872914ms
Jul  6 19:10:30.576: INFO: Pod "pod-subpath-test-inlinevolume-nvt4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062939772s
Jul  6 19:10:32.609: INFO: Pod "pod-subpath-test-inlinevolume-nvt4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096201269s
Jul  6 19:10:34.641: INFO: Pod "pod-subpath-test-inlinevolume-nvt4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128214714s
STEP: Saw pod success
Jul  6 19:10:34.641: INFO: Pod "pod-subpath-test-inlinevolume-nvt4" satisfied condition "Succeeded or Failed"
Jul  6 19:10:34.675: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-nvt4 container test-container-volume-inlinevolume-nvt4: <nil>
STEP: delete the pod
Jul  6 19:10:34.745: INFO: Waiting for pod pod-subpath-test-inlinevolume-nvt4 to disappear
Jul  6 19:10:34.779: INFO: Pod pod-subpath-test-inlinevolume-nvt4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-nvt4
Jul  6 19:10:34.779: INFO: Deleting pod "pod-subpath-test-inlinevolume-nvt4" in namespace "provisioning-2237"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":15,"skipped":116,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 9 lines ...
Jul  6 19:10:04.117: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-14812q4jn
STEP: creating a claim
Jul  6 19:10:04.151: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-56fr
STEP: Creating a pod to test exec-volume-test
Jul  6 19:10:04.248: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-56fr" in namespace "volume-1481" to be "Succeeded or Failed"
Jul  6 19:10:04.278: INFO: Pod "exec-volume-test-dynamicpv-56fr": Phase="Pending", Reason="", readiness=false. Elapsed: 30.268611ms
Jul  6 19:10:06.311: INFO: Pod "exec-volume-test-dynamicpv-56fr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063728518s
Jul  6 19:10:08.343: INFO: Pod "exec-volume-test-dynamicpv-56fr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095242553s
Jul  6 19:10:10.374: INFO: Pod "exec-volume-test-dynamicpv-56fr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126369826s
Jul  6 19:10:12.405: INFO: Pod "exec-volume-test-dynamicpv-56fr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157733324s
Jul  6 19:10:14.440: INFO: Pod "exec-volume-test-dynamicpv-56fr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192147624s
STEP: Saw pod success
Jul  6 19:10:14.440: INFO: Pod "exec-volume-test-dynamicpv-56fr" satisfied condition "Succeeded or Failed"
Jul  6 19:10:14.473: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod exec-volume-test-dynamicpv-56fr container exec-container-dynamicpv-56fr: <nil>
STEP: delete the pod
Jul  6 19:10:14.553: INFO: Waiting for pod exec-volume-test-dynamicpv-56fr to disappear
Jul  6 19:10:14.589: INFO: Pod exec-volume-test-dynamicpv-56fr no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-56fr
Jul  6 19:10:14.589: INFO: Deleting pod "exec-volume-test-dynamicpv-56fr" in namespace "volume-1481"
... skipping 71 lines ...
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4305 to expose endpoints map[pod1:[80]]
Jul  6 19:08:26.268: INFO: successfully validated that service endpoint-test2 in namespace services-4305 exposes endpoints map[pod1:[80]]
STEP: Checking if the Service forwards traffic to pod1
Jul  6 19:08:26.268: INFO: Creating new exec pod
Jul  6 19:08:29.365: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:31.793: INFO: rc: 1
Jul  6 19:08:31.794: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:32.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:38.229: INFO: rc: 1
Jul  6 19:08:38.229: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:38.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:41.231: INFO: rc: 1
Jul  6 19:08:41.231: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:41.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:47.240: INFO: rc: 1
Jul  6 19:08:47.240: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:47.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:50.223: INFO: rc: 1
Jul  6 19:08:50.223: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:50.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:56.290: INFO: rc: 1
Jul  6 19:08:56.290: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:56.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:08:59.271: INFO: rc: 1
Jul  6 19:08:59.271: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:08:59.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:05.277: INFO: rc: 1
Jul  6 19:09:05.277: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:05.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:11.218: INFO: rc: 1
Jul  6 19:09:11.218: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:11.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:14.269: INFO: rc: 1
Jul  6 19:09:14.269: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:14.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:20.230: INFO: rc: 1
Jul  6 19:09:20.230: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:20.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:26.232: INFO: rc: 1
Jul  6 19:09:26.232: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:26.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:29.220: INFO: rc: 1
Jul  6 19:09:29.220: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:29.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:35.290: INFO: rc: 1
Jul  6 19:09:35.290: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:35.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:41.264: INFO: rc: 1
Jul  6 19:09:41.264: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:41.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:47.247: INFO: rc: 1
Jul  6 19:09:47.247: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:47.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:50.225: INFO: rc: 1
Jul  6 19:09:50.225: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:50.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:53.226: INFO: rc: 1
Jul  6 19:09:53.226: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:53.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:09:59.235: INFO: rc: 1
Jul  6 19:09:59.235: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:09:59.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:02.306: INFO: rc: 1
Jul  6 19:10:02.306: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:02.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:08.401: INFO: rc: 1
Jul  6 19:10:08.401: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:08.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:11.234: INFO: rc: 1
Jul  6 19:10:11.234: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:11.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:14.223: INFO: rc: 1
Jul  6 19:10:14.223: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:14.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:17.284: INFO: rc: 1
Jul  6 19:10:17.284: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:17.795: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:23.330: INFO: rc: 1
Jul  6 19:10:23.330: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:23.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:26.252: INFO: rc: 1
Jul  6 19:10:26.252: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:26.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:29.403: INFO: rc: 1
Jul  6 19:10:29.404: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:29.794: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:35.257: INFO: rc: 1
Jul  6 19:10:35.257: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:35.257: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul  6 19:10:37.714: INFO: rc: 1
Jul  6 19:10:37.715: INFO: Service reachability failing with error: error running /tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4305 exec execpod8qf69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 19:10:37.715: FAIL: Unexpected error:
    <*errors.errorString | 0xc0027403b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
occurred

... skipping 243 lines ...
• Failure [140.065 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:10:37.715: Unexpected error:
      <*errors.errorString | 0xc0027403b0>: {
          s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:812
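Editor's note: the repeated "Retrying..." attempts above follow a probe-until-timeout pattern: the framework re-runs the `kubectl exec ... nc` command roughly every 3 seconds until it succeeds or the 2m0s budget is exhausted. A minimal shell sketch of that loop, for illustration only (the function name `probe_service` is hypothetical, and a stub command stands in for the real kubectl/nc probe):

```shell
#!/bin/sh
# Retry a probe command until it succeeds or the attempt budget runs out.
# "$@" after the first argument is the probe command, e.g.:
#   kubectl exec execpod -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 endpoint-test2 80'
probe_service() {
  attempts=$1
  shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      echo "reachable after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 0  # the real framework waits ~3s between retries
  done
  echo "service is not reachable within $attempts attempts" >&2
  return 1
}
```

For example, `probe_service 3 false` exhausts all three attempts and returns 1, which is the shape of the failure reported above: every attempt exited nonzero (DNS `getaddrinfo: Try again` or a TCP connect timeout) until the 2m0s deadline passed.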
------------------------------
{"msg":"FAILED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":3,"skipped":31,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:39.797: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:38.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:6.894 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:14.941 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":16,"skipped":118,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:49.891: INFO: Only supported for providers [gce gke] (not aws)
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:51.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9528" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":4,"skipped":36,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

SS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":17,"skipped":142,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:51.445: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:54.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7433" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":144,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:54.474: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:56.108: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 148 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:58.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5346" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:58.254: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:10:58.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":19,"skipped":150,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:10:58.833: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
• [SLOW TEST:16.929 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Jul  6 19:10:58.480: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4348" to be "Succeeded or Failed"
Jul  6 19:10:58.511: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 31.404637ms
Jul  6 19:11:00.543: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063119693s
Jul  6 19:11:02.576: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096071279s
STEP: Saw pod success
Jul  6 19:11:02.576: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  6 19:11:02.623: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jul  6 19:11:02.704: INFO: Waiting for pod pod-host-path-test to disappear
Jul  6 19:11:02.737: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:02.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4348" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:02.842: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 163 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":69,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-1f84b3d1-0daf-4359-8ab9-4830e94d9304
STEP: Creating secret with name secret-projected-all-test-volume-782dcbd2-08d4-4952-b262-112e6f87d520
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  6 19:11:02.286: INFO: Waiting up to 5m0s for pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c" in namespace "projected-3377" to be "Succeeded or Failed"
Jul  6 19:11:02.318: INFO: Pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.999512ms
Jul  6 19:11:04.350: INFO: Pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063867829s
Jul  6 19:11:06.382: INFO: Pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096145957s
Jul  6 19:11:08.413: INFO: Pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c": Phase="Running", Reason="", readiness=true. Elapsed: 6.127689577s
Jul  6 19:11:10.446: INFO: Pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160228689s
STEP: Saw pod success
Jul  6 19:11:10.446: INFO: Pod "projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c" satisfied condition "Succeeded or Failed"
Jul  6 19:11:10.477: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c container projected-all-volume-test: <nil>
STEP: delete the pod
Jul  6 19:11:10.545: INFO: Waiting for pod projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c to disappear
Jul  6 19:11:10.577: INFO: Pod projected-volume-71d011be-0fb7-4d79-b511-7ceb12923f9c no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.616 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:10.653: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:10.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6180" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":10,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:11.026: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 105 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":94,"failed":0}
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:10:35.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":12,"skipped":94,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:15.470: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 33 lines ...
Jul  6 19:10:11.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  6 19:10:13.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  6 19:10:15.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761195409, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  6 19:11:17.732: INFO: Waited 1m0.263975776s for the sample-apiserver to be ready to handle requests.
Jul  6 19:11:17.732: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"dbde50b6-6aeb-4f18-ad25-0e0743da9c2e","resourceVersion":"12131","creationTimestamp":"2021-07-06T19:10:17Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2021-07-06T19:10:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2021-07-06T19:10:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-6355","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpFd056QTJNVGt4TURBNFdoY05NekV3TnpBME1Ua3hNREE0V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUM4QURHTVZjTFBTaHh3UUszeGo5UFY0SlR3Q0J0UHlaZzViaGpPY3RVYnp1RDMKeXJ6SDc1ZFJIU20xTHNZd3FzYlMxd3hEdEhmUnk3MFgreURhRStIdFRJUWgvN1krTkV1UExhSlFBY1NDZGNHLwo5T3Npb0tValpCODIvSkhjS2xtYklyaTkyNFVQYklncVcyak5uL3F4bEUxUnoyN29PY01lRXlZT0RSYStwZ2JPCitHeHlYaHJiZnJsMG9qSkkzeFFhMjNaU1VUQTVDa2FSRkQ3ZXhCQzNZbkJUMU1NcnNWT2hZWnBNaXZpNzFZV0gKbEVjS3NmcDI3T2kxOGhwKzNLa2tmZCtuWUppK01KcWxqSnJ3Z2tGcEdraGxtMm84Q1dBOVJmQ1dJRGZGaEJ5ZgpBZGh4ZTkwSWszVWpWaUdGZFN0UE42WHFCeUQvemVUcnBSelFCSWpYQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJROHdVOGFwRDJleklqSzYyTFAKNmZqOFlQYXdpakFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBREtvVzJ0NjJQRVlqZGpsQVJvNEhUaXcvcjA4Y3hjS3lEczRnbG5vWWxPTXJEOUx2a0ZuClVUTUtVb2llWGNwUU95bXpPQzAzYUNLS1dZVHgwRlFhdXZDbGNFVmtBQ0ZqOGJrUXltSmVXNDVWczc5S3NydFMKNjYwSG5DaVd4aFZlamx2bVZWak16anVQRU54K2pxdXVlZkxjVG8yNXV0ay9LUElTVFBWRFR4L01ZbVI1bWtEcQpZcTA0NFVJeHhwMGl2NVZ2eE12eGJ6SXZ4UzJIbEVyMWJFbDVtNmk2UVJqbjM2T2xCVldPbklXSEVqM0F1WjZ2CmhwZkIxZkl4VGM2RWdEZTc5elBrMzhOV082N0IwVmNWSktxOFV2QzV5VVlGQWoyeWdXcGRPcEI1bE1JNmNrckEKb2ZYUFZTSVRsMk9lbVdvRldjeXVsUjZYTnBpeVBqaWMxcDg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2021-07-06T19:10:17Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://100.64.217.179:7443/apis/wardle.example.com/v1alpha1: Get \"https://100.64.217.179:7443/apis/wardle.example.com/v1alpha1\": context deadline exceeded"}]}}
Jul  6 19:11:17.732: INFO: current pods: {"metadata":{"resourceVersion":"12186"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-skt27","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-6355","uid":"45f2d2ef-8df6-49dc-913b-6fe2ed06de56","resourceVersion":"10651","creationTimestamp":"2021-07-06T19:10:09Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"92ed4d1c-b2c1-4527-bf94-f03e88594c26","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-07-06T19:10:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ed4d1c-b2c1-4527-bf94-f03e88594c26\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-07-06T19:10:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.54\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-fwgp2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-fwgp2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-fwgp2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-61-17.ca-central-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-06T19:10:09Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-06T19:10:17Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-06T19:10:17Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-06T19:10:09Z"}],"hostIP":"172.20.61.17","podIP":"100.96.4.54","podIPs":[{"ip":"100.96.4.54"}],"startTime":"2021-07-06T19:10:09Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2021-07-06T19:10:16Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2","containerID":"containerd://c2b78cb39a135e77c965df5a852e23987566c96641d1c02587de03eec8d57d43","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2021-07-06T19:10:11Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://8f8626c66a24fde08c4776ce3edfda6f91d2bc3df06b3efe083115c82910e515","started":true}],"qosClass":"BestEffort"}}]}
Jul  6 19:11:17.773: INFO: logs of sample-apiserver-deployment-64f6b9dc99-skt27/sample-apiserver (error: <nil>): W0706 19:10:12.115473       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0706 19:10:12.115557       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0706 19:10:12.135138       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0706 19:10:12.135353       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0706 19:10:12.137106       1 client.go:361] parsed scheme: "endpoint"
I0706 19:10:12.137142       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0706 19:10:12.137521       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0706 19:10:12.824935       1 client.go:361] parsed scheme: "endpoint"
I0706 19:10:12.825026       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0706 19:10:12.825386       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0706 19:10:13.137933       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0706 19:10:13.825842       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0706 19:10:14.442132       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0706 19:10:15.704702       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0706 19:10:17.670232       1 client.go:361] parsed scheme: "endpoint"
I0706 19:10:17.670271       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0706 19:10:17.671375       1 client.go:361] parsed scheme: "endpoint"
I0706 19:10:17.671400       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0706 19:10:17.672878       1 client.go:361] parsed scheme: "endpoint"
I0706 19:10:17.672901       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
I0706 19:10:17.716935       1 secure_serving.go:178] Serving securely on [::]:443
I0706 19:10:17.717337       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I0706 19:10:17.717368       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0706 19:10:17.816685       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0706 19:10:17.816885       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

Jul  6 19:11:17.812: INFO: logs of sample-apiserver-deployment-64f6b9dc99-skt27/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-07-06 19:10:16.757708 I | etcdmain: etcd Version: 3.4.13
2021-07-06 19:10:16.757748 I | etcdmain: Git SHA: ae9734ed2
2021-07-06 19:10:16.757752 I | etcdmain: Go Version: go1.12.17
2021-07-06 19:10:16.757755 I | etcdmain: Go OS/Arch: linux/amd64
2021-07-06 19:10:16.757759 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2021-07-06 19:10:16.757771 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2021-07-06 19:10:17.668889 N | etcdserver/membership: set the initial cluster version to 3.4
2021-07-06 19:10:17.668940 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2021-07-06 19:10:17.668946 I | etcdserver/api: enabled capabilities for version 3.4
2021-07-06 19:10:17.668959 I | embed: ready to serve client requests
2021-07-06 19:10:17.669692 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

Jul  6 19:11:17.812: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 282 lines ...
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:11:17.812: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 25 lines ...
STEP: Creating a mutating webhook configuration
Jul  6 19:10:35.990: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:10:46.166: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:10:56.260: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:11:06.355: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:11:16.423: INFO: Waiting for webhook configuration to be ready...
Jul  6 19:11:16.424: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000340240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 525 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:11:16.424: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000340240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":11,"skipped":111,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:20.358: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":2,"skipped":16,"failed":2,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:11:20.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  6 19:11:20.512: INFO: Waiting up to 5m0s for pod "downward-api-17ec15a5-1b43-4a5b-8316-01106618a278" in namespace "downward-api-2726" to be "Succeeded or Failed"
Jul  6 19:11:20.551: INFO: Pod "downward-api-17ec15a5-1b43-4a5b-8316-01106618a278": Phase="Pending", Reason="", readiness=false. Elapsed: 38.889208ms
Jul  6 19:11:22.583: INFO: Pod "downward-api-17ec15a5-1b43-4a5b-8316-01106618a278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071039733s
STEP: Saw pod success
Jul  6 19:11:22.583: INFO: Pod "downward-api-17ec15a5-1b43-4a5b-8316-01106618a278" satisfied condition "Succeeded or Failed"
Jul  6 19:11:22.614: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod downward-api-17ec15a5-1b43-4a5b-8316-01106618a278 container dapi-container: <nil>
STEP: delete the pod
Jul  6 19:11:22.681: INFO: Waiting for pod downward-api-17ec15a5-1b43-4a5b-8316-01106618a278 to disappear
Jul  6 19:11:22.712: INFO: Pod downward-api-17ec15a5-1b43-4a5b-8316-01106618a278 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:22.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2726" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":2,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:22.800: INFO: Only supported for providers [vsphere] (not aws)
... skipping 60 lines ...
• [SLOW TEST:7.850 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":13,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:23.349: INFO: Only supported for providers [gce gke] (not aws)
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1294" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":23,"failed":2,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:25.547: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
Jul  6 19:11:27.138: INFO: PersistentVolumeClaim pvc-9jrqx found but phase is Pending instead of Bound.
Jul  6 19:11:29.171: INFO: PersistentVolumeClaim pvc-9jrqx found and phase=Bound (6.130112284s)
Jul  6 19:11:29.171: INFO: Waiting up to 3m0s for PersistentVolume local-722ng to have phase Bound
Jul  6 19:11:29.202: INFO: PersistentVolume local-722ng found and phase=Bound (30.622664ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-hm9q
STEP: Creating a pod to test exec-volume-test
Jul  6 19:11:29.306: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-hm9q" in namespace "volume-2146" to be "Succeeded or Failed"
Jul  6 19:11:29.337: INFO: Pod "exec-volume-test-preprovisionedpv-hm9q": Phase="Pending", Reason="", readiness=false. Elapsed: 30.608921ms
Jul  6 19:11:31.368: INFO: Pod "exec-volume-test-preprovisionedpv-hm9q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061881974s
STEP: Saw pod success
Jul  6 19:11:31.368: INFO: Pod "exec-volume-test-preprovisionedpv-hm9q" satisfied condition "Succeeded or Failed"
Jul  6 19:11:31.399: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-hm9q container exec-container-preprovisionedpv-hm9q: <nil>
STEP: delete the pod
Jul  6 19:11:31.467: INFO: Waiting for pod exec-volume-test-preprovisionedpv-hm9q to disappear
Jul  6 19:11:31.497: INFO: Pod exec-volume-test-preprovisionedpv-hm9q no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-hm9q
Jul  6 19:11:31.497: INFO: Deleting pod "exec-volume-test-preprovisionedpv-hm9q" in namespace "volume-2146"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":114,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:6.578 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":5,"skipped":25,"failed":2,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:32.147: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:32.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7194" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":6,"skipped":33,"failed":2,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:33.097: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 43 lines ...
• [SLOW TEST:250.043 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:295
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:33.437: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 126 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":7,"skipped":42,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:34.062: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 50 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:35.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-5275" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:35.744: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:11:35.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a" in namespace "downward-api-4690" to be "Succeeded or Failed"
Jul  6 19:11:36.005: INFO: Pod "downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.557866ms
Jul  6 19:11:38.038: INFO: Pod "downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064421044s
STEP: Saw pod success
Jul  6 19:11:38.038: INFO: Pod "downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a" satisfied condition "Succeeded or Failed"
Jul  6 19:11:38.074: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a container client-container: <nil>
STEP: delete the pod
Jul  6 19:11:38.145: INFO: Waiting for pod downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a to disappear
Jul  6 19:11:38.177: INFO: Pod downwardapi-volume-2eb59403-2329-41b8-a6ac-7fc4e6920d5a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:38.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4690" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:38.271: INFO: Only supported for providers [openstack] (not aws)
... skipping 55 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 25 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-6f44c4c4-4a67-4b06-9a5d-3c7eb8b7e33f
STEP: Creating a pod to test consume secrets
Jul  6 19:11:38.797: INFO: Waiting up to 5m0s for pod "pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732" in namespace "secrets-8321" to be "Succeeded or Failed"
Jul  6 19:11:38.829: INFO: Pod "pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732": Phase="Pending", Reason="", readiness=false. Elapsed: 31.509008ms
Jul  6 19:11:40.864: INFO: Pod "pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06675427s
Jul  6 19:11:42.896: INFO: Pod "pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098914912s
STEP: Saw pod success
Jul  6 19:11:42.896: INFO: Pod "pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732" satisfied condition "Succeeded or Failed"
Jul  6 19:11:42.930: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732 container secret-volume-test: <nil>
STEP: delete the pod
Jul  6 19:11:43.005: INFO: Waiting for pod pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732 to disappear
Jul  6 19:11:43.037: INFO: Pod pod-secrets-430058a7-899f-4a4d-b5e9-0b854a477732 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:11:43.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8321" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:11:43.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-4078" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":5,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:43.687: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:11:43.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953" in namespace "projected-4484" to be "Succeeded or Failed"
Jul  6 19:11:43.955: INFO: Pod "downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953": Phase="Pending", Reason="", readiness=false. Elapsed: 31.264227ms
Jul  6 19:11:45.987: INFO: Pod "downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063106135s
STEP: Saw pod success
Jul  6 19:11:45.987: INFO: Pod "downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953" satisfied condition "Succeeded or Failed"
Jul  6 19:11:46.018: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953 container client-container: <nil>
STEP: delete the pod
Jul  6 19:11:46.086: INFO: Waiting for pod downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953 to disappear
Jul  6 19:11:46.118: INFO: Pod downwardapi-volume-f7195c5f-152b-4db1-a56c-62aca9c09953 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 28 lines ...
• [SLOW TEST:73.571 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:319
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":8,"skipped":75,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:11:46.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 90 lines ...
Jul  6 19:11:42.841: INFO: PersistentVolumeClaim pvc-4zdz6 found but phase is Pending instead of Bound.
Jul  6 19:11:44.885: INFO: PersistentVolumeClaim pvc-4zdz6 found and phase=Bound (8.171814577s)
Jul  6 19:11:44.885: INFO: Waiting up to 3m0s for PersistentVolume local-hzvnq to have phase Bound
Jul  6 19:11:44.917: INFO: PersistentVolume local-hzvnq found and phase=Bound (31.145079ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wjfn
STEP: Creating a pod to test subpath
Jul  6 19:11:45.011: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wjfn" in namespace "provisioning-9223" to be "Succeeded or Failed"
Jul  6 19:11:45.042: INFO: Pod "pod-subpath-test-preprovisionedpv-wjfn": Phase="Pending", Reason="", readiness=false. Elapsed: 30.94548ms
Jul  6 19:11:47.073: INFO: Pod "pod-subpath-test-preprovisionedpv-wjfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062133938s
STEP: Saw pod success
Jul  6 19:11:47.073: INFO: Pod "pod-subpath-test-preprovisionedpv-wjfn" satisfied condition "Succeeded or Failed"
Jul  6 19:11:47.104: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-wjfn container test-container-subpath-preprovisionedpv-wjfn: <nil>
STEP: delete the pod
Jul  6 19:11:47.181: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wjfn to disappear
Jul  6 19:11:47.212: INFO: Pod pod-subpath-test-preprovisionedpv-wjfn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wjfn
Jul  6 19:11:47.212: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wjfn" in namespace "provisioning-9223"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:11:47.725: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1088
STEP: running cluster-info dump
Jul  6 19:11:47.924: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2614 cluster-info dump'
Jul  6 19:11:50.210: INFO: stderr: ""
Jul  6 19:11:50.213: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"13244\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-44-51.ca-central-1.compute.internal\",\n                \"uid\": \"63325104-67d6-4441-97d8-03ae4b4af776\",\n                \"resourceVersion\": \"3010\",\n                \"creationTimestamp\": \"2021-07-06T19:02:11Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"master-ca-central-1a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-44-51.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.ebs.csi.aws.com/zone\": \"ca-central-1a\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"ebs.csi.aws.com\\\":\\\"i-0199654295adf8c6d\\\"}\",\n                    
\"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0199654295adf8c6d\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3784324Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3681924Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:07:53Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:02:09Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": 
\"2021-07-06T19:07:53Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:02:09Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:07:53Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:02:09Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:07:53Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:02:27Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.44.51\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.182.118.89\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-182-118-89.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec223875553ea40f3fbc9895c36a2bec\",\n                    \"systemUUID\": \"ec223875-553e-a40f-3fbc-9895c36a2bec\",\n                    \"bootID\": \"e41c565f-38ae-46ea-a8db-f5a02c244dc2\",\n                    \"kernelVersion\": \"5.8.0-1038-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.22.0-beta.0\",\n                    \"kubeProxyVersion\": \"v1.22.0-beta.0\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\"\n                        ],\n                        \"sizeBytes\": 171082409\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64@sha256:125a9c5805e1327c0ff2ebf23c71fd9fe2a68203ff118a162e2d04737999db58\",\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 133254861\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 127900125\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 122248003\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\"\n                        ],\n                        \"sizeBytes\": 113890838\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\"\n                        ],\n                        \"sizeBytes\": 112365079\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381\",\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\"\n                        ],\n                        
\"sizeBytes\": 67082369\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 53004600\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1\"\n                        ],\n                        \"sizeBytes\": 25632279\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-provider-aws/cloud-controller-manager@sha256:6e0084ecedc8d6d2b0f5cb984c4fe6c860c8d7283c173145b0eaeaaff35ba98a\",\n                            \"gcr.io/k8s-staging-provider-aws/cloud-controller-manager:latest\"\n                        ],\n                        \"sizeBytes\": 16211866\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n          
              ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-51-240.ca-central-1.compute.internal\",\n                \"uid\": \"eb1cf5c0-53a1-4bdc-be05-fb5f7c64313a\",\n                \"resourceVersion\": \"13138\",\n                \"creationTimestamp\": \"2021-07-06T19:03:55Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-51-240.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.ebs.csi.aws.com/zone\": \"ca-central-1a\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-51-240.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-1535\\\":\\\"ip-172-20-51-240.ca-central-1.compute.internal\\\",\\\"csi-mock-csi-mock-volumes-1807\\\":\\\"csi-mock-csi-mock-volumes-1807\\\",\\\"ebs.csi.aws.com\\\":\\\"i-005381dcd763db5a9\\\"}\",\n                    
\"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-005381dcd763db5a9\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866248Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:45Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:55Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:45Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:55Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no 
disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:45Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:55Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:45Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:56Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.51.240\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"99.79.64.57\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-51-240.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-51-240.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-99-79-64-57.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n              
      }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2942c2fe1edf0a7e44aa300a0336da\",\n                    \"systemUUID\": \"ec2942c2-fe1e-df0a-7e44-aa300a0336da\",\n                    \"bootID\": \"9b1a0337-86dd-42c1-bb9e-67ea7c3bd673\",\n                    \"kernelVersion\": \"5.8.0-1038-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.22.0-beta.0\",\n                    \"kubeProxyVersion\": \"v1.22.0-beta.0\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 133254861\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 112029652\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 95843946\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381\",\n                   
         \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 67082369\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 41902332\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 22629806\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.2.0\"\n                        ],\n              
          \"sizeBytes\": 21367429\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n    
                    \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 301416\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-hostpath-ephemeral-1535^f9ca174b-de8d-11eb-a382-da608be681a6\",\n                    \"kubernetes.io/csi/csi-hostpath-ephemeral-1535^fdc8324f-de8d-11eb-a382-da608be681a6\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-ephemeral-1535^f9ca174b-de8d-11eb-a382-da608be681a6\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-ephemeral-1535^fdc8324f-de8d-11eb-a382-da608be681a6\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-56-177.ca-central-1.compute.internal\",\n                \"uid\": 
\"8f6fcee4-a438-4a92-ba56-7da5da2ae33a\",\n                \"resourceVersion\": \"12325\",\n                \"creationTimestamp\": \"2021-07-06T19:03:48Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-56-177.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.ebs.csi.aws.com/zone\": \"ca-central-1a\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-56-177.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-volume-expand-2957\\\":\\\"ip-172-20-56-177.ca-central-1.compute.internal\\\",\\\"ebs.csi.aws.com\\\":\\\"i-0d03670fa0e8e284c\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                
\"providerID\": \"aws:///ca-central-1a/i-0d03670fa0e8e284c\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866248Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:19Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:48Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:19Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:48Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:19Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:48Z\",\n                        \"reason\": 
\"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:19Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:49Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.56.177\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.182.214.86\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-56-177.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-56-177.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-182-214-86.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2504ce9417122675416db63e3130c6\",\n                    \"systemUUID\": \"ec2504ce-9417-1226-7541-6db63e3130c6\",\n                    \"bootID\": \"07587ff9-619b-4645-b0a7-7d41b1377509\",\n                    \"kernelVersion\": 
\"5.8.0-1038-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.22.0-beta.0\",\n                    \"kubeProxyVersion\": \"v1.22.0-beta.0\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 133254861\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 95843946\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381\",\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 67082369\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 41902332\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/build-image/debian-iptables@sha256:27aaf19acbe10bed00c190b549f29cb774d444486c91b341122ef3d661f913c9\",\n                            \"k8s.gcr.io/build-image/debian-iptables:buster-v1.6.2\"\n                        ],\n                        \"sizeBytes\": 40458573\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1\"\n                        ],\n                        \"sizeBytes\": 22631062\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 22629806\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 21584611\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.2.1\"\n                        ],\n                        \"sizeBytes\": 21366448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1\"\n                        ],\n                        \"sizeBytes\": 21331336\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n                        ],\n                        \"sizeBytes\": 17748448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1\"\n                        ],\n                        \"sizeBytes\": 14930811\n                    },\n                    {\n                        \"names\": [\n                            
\"docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            \"docker.io/coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 12893350\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8561694\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 7933739\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 301416\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-61-17.ca-central-1.compute.internal\",\n                \"uid\": \"2630da57-08e4-492f-9ed3-4032ca14133e\",\n                \"resourceVersion\": \"12945\",\n     
           \"creationTimestamp\": \"2021-07-06T19:04:00Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-61-17.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.ebs.csi.aws.com/zone\": \"ca-central-1a\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-61-17.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"ebs.csi.aws.com\\\":\\\"i-0e903739c9d695a33\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0e903739c9d695a33\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    
\"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968640Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866240Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:30Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:04:00Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:30Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:04:00Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:30Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:04:00Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": 
\"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:30Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:04:10Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.61.17\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"99.79.64.126\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-61-17.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-61-17.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-99-79-64-126.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2748534306bf648c5519b910adbf29\",\n                    \"systemUUID\": \"ec274853-4306-bf64-8c55-19b910adbf29\",\n                    \"bootID\": \"76431a90-48f9-43ff-8ae4-7614a8b5040e\",\n                    \"kernelVersion\": \"5.8.0-1038-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": 
\"v1.22.0-beta.0\",\n                    \"kubeProxyVersion\": \"v1.22.0-beta.0\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 133254861\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2\",\n                            \"k8s.gcr.io/etcd:3.4.13-0\"\n                        ],\n                        \"sizeBytes\": 86742272\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381\",\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 67082369\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n             
           \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276\",\n                            \"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4\"\n                        ],\n                        \"sizeBytes\": 24757245\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1\"\n                        ],\n                        \"sizeBytes\": 22631062\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 22629806\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 21584611\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.2.0\"\n                        ],\n                        \"sizeBytes\": 21367429\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.2.1\"\n                        ],\n                        \"sizeBytes\": 21366448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1\"\n                        ],\n                        \"sizeBytes\": 21331336\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1\"\n                        ],\n                        \"sizeBytes\": 14930811\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8561694\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 7933739\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                 
           \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 301416\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-hostpath-provisioning-6302^ef8bb2cf-de8d-11eb-aaed-2e17482b0c90\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-provisioning-6302^ef8bb2cf-de8d-11eb-aaed-2e17482b0c90\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-61-241.ca-central-1.compute.internal\",\n                \"uid\": \"35e40922-48e7-4002-a2c3-ad6b22d85f32\",\n                \"resourceVersion\": \"13071\",\n                \"creationTimestamp\": \"2021-07-06T19:03:44Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-61-241.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": 
\"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.ebs.csi.aws.com/zone\": \"ca-central-1a\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-61-241.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"ebs.csi.aws.com\\\":\\\"i-0e332ec24c0ae4be0\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0e332ec24c0ae4be0\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866248Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": 
\"2021-07-06T19:11:15Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:44Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:15Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:44Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:15Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:44Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-07-06T19:11:15Z\",\n                        \"lastTransitionTime\": \"2021-07-06T19:03:45Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.61.241\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.183.199.99\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-183-199-99.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2781228b2cb445eec750ad50267332\",\n                    \"systemUUID\": \"ec278122-8b2c-b445-eec7-50ad50267332\",\n                    \"bootID\": \"3c251c6e-8b30-47ad-a993-68c5a97ab5e4\",\n                    \"kernelVersion\": \"5.8.0-1038-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.22.0-beta.0\",\n                    \"kubeProxyVersion\": \"v1.22.0-beta.0\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0\"\n                        ],\n                        \"sizeBytes\": 133254861\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381\",\n                            \"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 67082369\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 41902332\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1\"\n                        ],\n               
         \"sizeBytes\": 22631062\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 22629806\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 21584611\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.2.0\"\n                        ],\n                        \"sizeBytes\": 21367429\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.2.1\"\n                        ],\n                        \"sizeBytes\": 21366448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1\"\n                        ],\n                        \"sizeBytes\": 21331336\n                    },\n                    {\n         
               \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\"\n                        ],\n                        \"sizeBytes\": 15191740\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1\"\n                        ],\n                        \"sizeBytes\": 14930811\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            \"docker.io/coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 12893350\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8561694\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 7933739\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 301416\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-hostpath-provisioning-1848^6ce6f668-de8d-11eb-b0b2-5e7b10396682\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-provisioning-1848^6ce6f668-de8d-11eb-b0b2-5e7b10396682\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"4662\"\n    },\n    \"items\": [\n        {\n       
     \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48e7211b8fb0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"342cbf84-d12b-4f13-b853-eab6df2a7569\",\n                \"resourceVersion\": \"66\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"448\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/aws-cloud-controller-manager-5n582 to ip-172-20-44-51.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48e73c4b13c5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"85976ce7-4402-46a2-8391-6bac87801854\",\n                \"resourceVersion\": \"70\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"462\",\n                \"fieldPath\": \"spec.containers{aws-cloud-controller-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"gcr.io/k8s-staging-provider-aws/cloud-controller-manager:latest\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48e863c897a3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"12bf9c35-2810-45ac-b39e-3444014a3ca6\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-07-06T19:02:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"462\",\n                \"fieldPath\": \"spec.containers{aws-cloud-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"gcr.io/k8s-staging-provider-aws/cloud-controller-manager:latest\\\" in 4.957303539s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:33Z\",\n            
\"lastTimestamp\": \"2021-07-06T19:02:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48e863ccb7cd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"92994d3d-dfa9-4ce0-a773-a64ba3ddc8e9\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-07-06T19:02:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"462\",\n                \"fieldPath\": \"spec.containers{aws-cloud-controller-manager}\"\n            },\n            \"reason\": \"Failed\",\n            \"message\": \"Error: services have not yet been read at least once, cannot construct envvars\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:33Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:33Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48e86d295664\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"00c43f3d-2f2c-4cba-8458-66cb70a82e54\",\n                \"resourceVersion\": \"94\",\n                \"creationTimestamp\": 
\"2021-07-06T19:02:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"462\",\n                \"fieldPath\": \"spec.containers{aws-cloud-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"gcr.io/k8s-staging-provider-aws/cloud-controller-manager:latest\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:33Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:48Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48ebe09b789b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4648465b-5538-40d2-a279-069bccceef05\",\n                \"resourceVersion\": \"95\",\n                \"creationTimestamp\": \"2021-07-06T19:02:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"462\",\n                \"fieldPath\": \"spec.containers{aws-cloud-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            
\"message\": \"Created container aws-cloud-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:48Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"aws-cloud-controller-manager-5n582.168f48ebe5f7eac2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"765421b0-5562-409d-8a37-398b30836845\",\n                \"resourceVersion\": \"96\",\n                \"creationTimestamp\": \"2021-07-06T19:02:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager-5n582\",\n                \"uid\": \"8399e140-b455-4fd0-8a2f-d2a1acfc5f3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"462\",\n                \"fieldPath\": \"spec.containers{aws-cloud-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container aws-cloud-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:48Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"aws-cloud-controller-manager.168f48e71e8b1d6d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3d2db76c-4d6e-4f1e-abc7-9547c9a2dc19\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"aws-cloud-controller-manager\",\n                \"uid\": \"058dfdc3-af1a-4214-8b88-7a10396719e6\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"430\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: aws-cloud-controller-manager-5n582\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cloud-controller-manager.168f48ec145f0a8a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3c858673-2c5a-405c-b00b-d44ad157032a\",\n                \"resourceVersion\": \"97\",\n                \"creationTimestamp\": \"2021-07-06T19:02:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cloud-controller-manager\",\n                \"uid\": \"4a835748-b576-42ac-9cb7-0f7c66382517\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"503\"\n            },\n            \"reason\": \"LeaderElection\",\n            
\"message\": \"ip-172-20-44-51_b704b51a-8e84-4bf9-9e3a-13f6e3782ef2 became leader\",\n            \"source\": {\n                \"component\": \"cloud-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:49Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:49Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb.168f48e71e5701aa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"356e1100-034b-4232-a4ce-cd03aeaf300e\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb\",\n                \"uid\": \"c1975141-0fcf-42e2-8ca4-5c2f1a095c80\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"438\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb.168f48f92887f865\",\n                \"namespace\": \"kube-system\",\n   
             \"uid\": \"93a1d077-5ed5-4511-895b-5451be469fe6\",\n                \"resourceVersion\": \"130\",\n                \"creationTimestamp\": \"2021-07-06T19:03:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb\",\n                \"uid\": \"c1975141-0fcf-42e2-8ca4-5c2f1a095c80\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"450\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-6f594f4c58-nw8gb to ip-172-20-61-241.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:45Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb.168f48f9bd3edae6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"98001ca5-5f30-4d74-bb14-a552487f0b0e\",\n                \"resourceVersion\": \"151\",\n                \"creationTimestamp\": \"2021-07-06T19:03:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb\",\n                \"uid\": \"c1975141-0fcf-42e2-8ca4-5c2f1a095c80\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            
\"message\": \"Pulling image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:47Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb.168f48fa617d4b67\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"08924f9a-06af-4bf9-b698-4f511c668874\",\n                \"resourceVersion\": \"174\",\n                \"creationTimestamp\": \"2021-07-06T19:03:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb\",\n                \"uid\": \"c1975141-0fcf-42e2-8ca4-5c2f1a095c80\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\" in 2.755515796s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:50Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n   
     },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb.168f48fa689093aa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c8927892-0679-4239-be6b-c8cc691e03bb\",\n                \"resourceVersion\": \"176\",\n                \"creationTimestamp\": \"2021-07-06T19:03:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb\",\n                \"uid\": \"c1975141-0fcf-42e2-8ca4-5c2f1a095c80\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:50Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb.168f48fa6d42af00\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0ebc4c65-c68e-4212-959b-43b173829b8b\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2021-07-06T19:03:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-nw8gb\",\n                \"uid\": 
\"c1975141-0fcf-42e2-8ca4-5c2f1a095c80\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:50Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58.168f48e71b677305\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18d6d51c-0034-449f-9d90-2ff895b10e54\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58\",\n                \"uid\": \"915d7409-fa36-4e16-a138-db3021565989\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"412\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-6f594f4c58-nw8gb\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n    
        \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.168f48e710cfb850\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7edf0dd6-9156-4a8d-b747-c7585747abd6\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"17b84c2a-c497-4019-92f4-8018d8088187\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"364\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-6f594f4c58 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-l9p5d.168f48fa7e14a117\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4cde8744-f005-42fc-94c1-056bb866af13\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-07-06T19:03:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-l9p5d\",\n                \"uid\": \"8623f226-1b49-42a9-a7ed-d934951d723e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"753\"\n          
  },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-l9p5d to ip-172-20-56-177.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-l9p5d.168f48fa9ed74260\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b4f97127-55ea-46dc-9c6d-30dc1b17cc5b\",\n                \"resourceVersion\": \"197\",\n                \"creationTimestamp\": \"2021-07-06T19:03:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-l9p5d\",\n                \"uid\": \"8623f226-1b49-42a9-a7ed-d934951d723e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"756\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-177.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n              
  \"name\": \"coredns-f45c4bf76-l9p5d.168f48fb81235917\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3aeb23c-b2ff-4091-ac12-a5056bd04adb\",\n                \"resourceVersion\": \"213\",\n                \"creationTimestamp\": \"2021-07-06T19:03:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-l9p5d\",\n                \"uid\": \"8623f226-1b49-42a9-a7ed-d934951d723e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"756\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 3.796617313s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-177.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:55Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-l9p5d.168f48fb89cd0205\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"83a1b21c-3a65-4b15-8626-eee8a826fe20\",\n                \"resourceVersion\": \"214\",\n                \"creationTimestamp\": \"2021-07-06T19:03:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-l9p5d\",\n                \"uid\": \"8623f226-1b49-42a9-a7ed-d934951d723e\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"756\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-177.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:55Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-l9p5d.168f48fb8edda003\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"99e4548f-0f9a-455d-a7ba-c71dab3faf04\",\n                \"resourceVersion\": \"215\",\n                \"creationTimestamp\": \"2021-07-06T19:03:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-l9p5d\",\n                \"uid\": \"8623f226-1b49-42a9-a7ed-d934951d723e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"756\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-177.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:55Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": 
\"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-t6pzg.168f48e71ef76fab\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8b266c74-a82f-4b48-9cbf-c37b2b810e62\",\n                \"resourceVersion\": \"63\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-t6pzg\",\n                \"uid\": \"8ace7a95-b3b7-4a6f-bcc9-4cb2f03613d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"439\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-t6pzg.168f48f928b45dc3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"138de6db-ab20-434f-ae3e-0a41bf662796\",\n                \"resourceVersion\": \"132\",\n                \"creationTimestamp\": \"2021-07-06T19:03:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-t6pzg\",\n                \"uid\": \"8ace7a95-b3b7-4a6f-bcc9-4cb2f03613d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": 
\"453\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-t6pzg to ip-172-20-61-241.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:45Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-t6pzg.168f48f9bbae75b8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a67ab350-c7e6-44af-b5d3-ca8efb619d77\",\n                \"resourceVersion\": \"150\",\n                \"creationTimestamp\": \"2021-07-06T19:03:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-t6pzg\",\n                \"uid\": \"8ace7a95-b3b7-4a6f-bcc9-4cb2f03613d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"705\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:47Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"coredns-f45c4bf76-t6pzg.168f48fa052e492d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f5f051aa-aed1-4502-a9b3-6492cb7f9d40\",\n                \"resourceVersion\": \"157\",\n                \"creationTimestamp\": \"2021-07-06T19:03:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-t6pzg\",\n                \"uid\": \"8ace7a95-b3b7-4a6f-bcc9-4cb2f03613d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"705\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 1.233099707s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:49Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:49Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-t6pzg.168f48fa0ea23cbc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"70be5374-246d-445c-8479-1cbb681fb8cf\",\n                \"resourceVersion\": \"158\",\n                \"creationTimestamp\": \"2021-07-06T19:03:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-t6pzg\",\n                \"uid\": \"8ace7a95-b3b7-4a6f-bcc9-4cb2f03613d6\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"705\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:49Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:49Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-t6pzg.168f48fa1373efa4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"38380da4-6693-4be0-aefc-c7d064f680d5\",\n                \"resourceVersion\": \"159\",\n                \"creationTimestamp\": \"2021-07-06T19:03:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-t6pzg\",\n                \"uid\": \"8ace7a95-b3b7-4a6f-bcc9-4cb2f03613d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"705\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:49Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:49Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.168f48e71a46f4f4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8d3e9f3b-e213-4562-b403-ecf254f3c638\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"f50a520f-b980-4105-8332-95e295c5a335\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"414\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-t6pzg\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.168f48fa7d5f8fa2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7c59b7dd-5c6b-4434-87c0-70ffb4a80dd4\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-07-06T19:03:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"f50a520f-b980-4105-8332-95e295c5a335\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"752\"\n            },\n            \"reason\": 
\"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-l9p5d\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.168f48e71112c18f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6c0d15ef-0c36-484f-ab8d-2ab7f0ce0e4c\",\n                \"resourceVersion\": \"47\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"416839f7-2a27-48f7-a428-0eebfaff581c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"355\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.168f48fa7d13a92c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"190b1eaf-1117-4684-b98c-3a4ace1032ed\",\n                \"resourceVersion\": \"179\",\n                
\"creationTimestamp\": \"2021-07-06T19:03:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"416839f7-2a27-48f7-a428-0eebfaff581c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"751\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5db8dc7c7-gf89c.168f48e71abacb52\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9ad1994f-bb33-49a5-ae0d-30dfbdf2af7c\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5db8dc7c7-gf89c\",\n                \"uid\": \"49d2b614-a0bf-4bd4-bf86-b161d13cc485\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-5db8dc7c7-gf89c to ip-172-20-44-51.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": 
\"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5db8dc7c7-gf89c.168f48e736d2792c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8c5471c2-a1b0-4b69-860f-6cf08ae4d03f\",\n                \"resourceVersion\": \"91\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5db8dc7c7-gf89c\",\n                \"uid\": \"49d2b614-a0bf-4bd4-bf86-b161d13cc485\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"442\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:43Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5db8dc7c7-gf89c.168f48e736d38f4d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"128dae69-4181-4fbb-9507-c5cf4607cbd2\",\n                \"resourceVersion\": \"72\",\n              
  \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5db8dc7c7-gf89c\",\n                \"uid\": \"49d2b614-a0bf-4bd4-bf86-b161d13cc485\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"442\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Failed\",\n            \"message\": \"Error: services have not yet been read at least once, cannot construct envvars\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:28Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5db8dc7c7-gf89c.168f48eaf38bf532\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ee7119f4-b9d7-446c-b1f8-cdb6630f5e14\",\n                \"resourceVersion\": \"92\",\n                \"creationTimestamp\": \"2021-07-06T19:02:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5db8dc7c7-gf89c\",\n                \"uid\": \"49d2b614-a0bf-4bd4-bf86-b161d13cc485\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"442\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container 
dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:44Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5db8dc7c7-gf89c.168f48eaf8f9fe25\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bcc1d107-2544-4682-98e5-35478e7f171e\",\n                \"resourceVersion\": \"93\",\n                \"creationTimestamp\": \"2021-07-06T19:02:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5db8dc7c7-gf89c\",\n                \"uid\": \"49d2b614-a0bf-4bd4-bf86-b161d13cc485\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"442\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-51.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:44Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5db8dc7c7.168f48e71b5a9f5a\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"14ba2c30-72ee-40e4-88e9-2aa7755b27e7\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-07-06T19:02:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5db8dc7c7\",\n                \"uid\": \"ed9f99cd-ee20-40a5-93e8-a663e383aeb7\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"413\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-5db8dc7c7-gf89c\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.168f48e7112a4cda\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e1197f66-f363-46c1-9441-93cf947ca0f0\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"b4dfd899-f391-4d68-bb5f-831d7d31799c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"312\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-5db8dc7c7 to 1\",\n            \"source\": {\n                
\"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-aws-com.168f48fd01c6e345\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f4b95370-86a4-4897-8a1a-9da40bffd1cb\",\n                \"resourceVersion\": \"246\",\n                \"creationTimestamp\": \"2021-07-06T19:04:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-aws-com\",\n                \"uid\": \"792f20b3-d457-4a91-89fd-083bb358dfde\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"842\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ebs-csi-controller-566c97f85c-p6htc became leader\",\n            \"source\": {\n                \"component\": \"ebs.csi.aws.com/ebs-csi-controller-566c97f85c-p6htc\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:01Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48e71911fe73\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b6369be3-6807-49d2-9835-60f3283293b9\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n   
         },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"437\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48f928ae6b75\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d60fc254-4ce5-4c76-ba9b-bdb25769237d\",\n                \"resourceVersion\": \"131\",\n                \"creationTimestamp\": \"2021-07-06T19:03:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"449\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/ebs-csi-controller-566c97f85c-p6htc to ip-172-20-61-241.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            
\"firstTimestamp\": \"2021-07-06T19:03:45Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48f9c127f9eb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d11f5440-7338-484f-82b0-aab68b2261c3\",\n                \"resourceVersion\": \"154\",\n                \"creationTimestamp\": \"2021-07-06T19:03:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{ebs-plugin}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:48Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fb1ae82128\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c49ddae5-4d56-4604-a586-0f7527e56ad5\",\n                \"resourceVersion\": \"198\",\n    
            \"creationTimestamp\": \"2021-07-06T19:03:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{ebs-plugin}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0\\\" in 5.800712499s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:53Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fb40491f26\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f07894ce-d804-4d43-ad6a-c68af79247b2\",\n                \"resourceVersion\": \"201\",\n                \"creationTimestamp\": \"2021-07-06T19:03:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{ebs-plugin}\"\n            },\n            \"reason\": \"Created\",\n            
\"message\": \"Created container ebs-plugin\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:54Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fb47719177\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c5f4f0c6-b2f5-45a8-b66f-28547a53a8b9\",\n                \"resourceVersion\": \"203\",\n                \"creationTimestamp\": \"2021-07-06T19:03:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{ebs-plugin}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container ebs-plugin\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:54Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"ebs-csi-controller-566c97f85c-p6htc.168f48fb47b3e5d7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fb982d63-fd2c-4887-b1e5-68d66d51335f\",\n                \"resourceVersion\": \"204\",\n                \"creationTimestamp\": \"2021-07-06T19:03:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-provisioner}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:54Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fba410bcef\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ac22be81-ea5f-477d-ade1-625147ad53e5\",\n                \"resourceVersion\": \"216\",\n                \"creationTimestamp\": \"2021-07-06T19:03:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                
\"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-provisioner}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0\\\" in 1.549571974s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fbac806cd7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bed17c35-1a2a-4d8d-ac10-545eb2431ab3\",\n                \"resourceVersion\": \"217\",\n                \"creationTimestamp\": \"2021-07-06T19:03:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-provisioner}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container csi-provisioner\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:56Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fbb1b9592b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"609c9498-77db-4190-8dd4-c1d38061a8fa\",\n                \"resourceVersion\": \"218\",\n                \"creationTimestamp\": \"2021-07-06T19:03:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-provisioner}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container csi-provisioner\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fbb1c80b9c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e0616f0d-9bb9-49fb-8f3b-e4de446ffcc5\",\n                \"resourceVersion\": \"219\",\n                \"creationTimestamp\": \"2021-07-06T19:03:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n  
              \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-attacher}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/sig-storage/csi-attacher:v3.2.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fc5b67ec96\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9ac561b6-8a63-4d75-ba11-c15ef17f60ac\",\n                \"resourceVersion\": \"233\",\n                \"creationTimestamp\": \"2021-07-06T19:03:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-attacher}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/sig-storage/csi-attacher:v3.2.0\\\" in 2.845813141s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n         
       \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fc62b3c197\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"934d1656-459a-421c-9075-4af3fa88c91b\",\n                \"resourceVersion\": \"234\",\n                \"creationTimestamp\": \"2021-07-06T19:03:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-attacher}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container csi-attacher\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fc6804cf63\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"8f35aaf1-941c-48f1-9328-9e2e97e833da\",\n                \"resourceVersion\": \"235\",\n                \"creationTimestamp\": \"2021-07-06T19:03:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-attacher}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container csi-attacher\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fc68167275\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"254299bf-2941-400a-8ad0-f69842200e8d\",\n                \"resourceVersion\": \"236\",\n                \"creationTimestamp\": \"2021-07-06T19:03:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-snapshotter}\"\n            },\n            \"reason\": 
\"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"lastTimestamp\": \"2021-07-06T19:03:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fcf440a666\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cb6e3575-fd38-4196-bff3-722cb99c081e\",\n                \"resourceVersion\": \"244\",\n                \"creationTimestamp\": \"2021-07-06T19:04:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-snapshotter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\\\" in 2.351540262s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:01Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fcfbd7217e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3fb8cd4-ecb4-4ff3-93c8-94456d06eec3\",\n                \"resourceVersion\": \"245\",\n                \"creationTimestamp\": \"2021-07-06T19:04:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-snapshotter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container csi-snapshotter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:01Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd062bf23f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9dacf264-5cc0-4d8e-a831-3d7eafb98720\",\n                \"resourceVersion\": \"247\",\n                \"creationTimestamp\": \"2021-07-06T19:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                
\"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-snapshotter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container csi-snapshotter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:02Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd0637eb8d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0f8e0311-a8c9-43c4-92e9-48d88b4c0ba0\",\n                \"resourceVersion\": \"248\",\n                \"creationTimestamp\": \"2021-07-06T19:04:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-resizer}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:02Z\",\n            \"lastTimestamp\": 
\"2021-07-06T19:04:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd54011dae\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"918c9636-6c7e-4f67-81d9-41edc84c9571\",\n                \"resourceVersion\": \"252\",\n                \"creationTimestamp\": \"2021-07-06T19:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-resizer}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\\\" in 1.305004147s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd5b072458\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5c7382cb-c621-48f6-b36c-25689fbb67e5\",\n                \"resourceVersion\": \"253\",\n                \"creationTimestamp\": 
\"2021-07-06T19:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-resizer}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container csi-resizer\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd61b8d4bc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d889ca2d-012f-49a0-8327-b78a0f5ef00b\",\n                \"resourceVersion\": \"254\",\n                \"creationTimestamp\": \"2021-07-06T19:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{csi-resizer}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container csi-resizer\",\n            \"source\": {\n                \"component\": 
\"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd61dd919d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cb6b801b-21af-4cbf-bbd6-7d92ae5c501f\",\n                \"resourceVersion\": \"255\",\n                \"creationTimestamp\": \"2021-07-06T19:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{liveness-probe}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd64e8a20d\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"c137a2d1-d249-430a-b8fc-6371322feacf\",\n                \"resourceVersion\": \"256\",\n                \"creationTimestamp\": \"2021-07-06T19:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": \"spec.containers{liveness-probe}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container liveness-probe\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc.168f48fd6a493aca\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"59b00a28-1efa-4af2-915b-009402afa1fc\",\n                \"resourceVersion\": \"257\",\n                \"creationTimestamp\": \"2021-07-06T19:04:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c-p6htc\",\n                \"uid\": \"24a04d49-1b90-489b-bdc7-c6ee7addd41b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"704\",\n                \"fieldPath\": 
\"spec.containers{liveness-probe}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container liveness-probe\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-241.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"lastTimestamp\": \"2021-07-06T19:04:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ebs-csi-controller-566c97f85c.168f48e718900e00\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d9a7a813-83d8-4547-9944-ed35b1819e01\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-07-06T19:02:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ebs-csi-controller-566c97f85c\",\n                \"uid\": \"cc9ab806-0969-41e3-a3af-f6da2f326849\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"415\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: ebs-csi-controller-566c97f85c-p6htc\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-07-06T19:02:27Z\",\n            \"lastTimestamp\": \"2021-07-
... skipping 68402 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:31:42.934: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b8240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:988
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":54,"skipped":520,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:31:46.434: INFO: Only supported for providers [gce gke] (not aws)
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":26,"skipped":217,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}

SSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":36,"skipped":294,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:32:01.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:02.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1918" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":37,"skipped":294,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:02.281: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":522,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:31:48.830: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Jul  6 19:31:57.560: INFO: PersistentVolumeClaim pvc-tsjgl found but phase is Pending instead of Bound.
Jul  6 19:31:59.594: INFO: PersistentVolumeClaim pvc-tsjgl found and phase=Bound (8.159158603s)
Jul  6 19:31:59.594: INFO: Waiting up to 3m0s for PersistentVolume local-jb467 to have phase Bound
Jul  6 19:31:59.625: INFO: PersistentVolume local-jb467 found and phase=Bound (30.533534ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2t5m
STEP: Creating a pod to test subpath
Jul  6 19:31:59.723: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2t5m" in namespace "provisioning-9814" to be "Succeeded or Failed"
Jul  6 19:31:59.753: INFO: Pod "pod-subpath-test-preprovisionedpv-2t5m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.597009ms
Jul  6 19:32:01.784: INFO: Pod "pod-subpath-test-preprovisionedpv-2t5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06183113s
Jul  6 19:32:03.816: INFO: Pod "pod-subpath-test-preprovisionedpv-2t5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093066168s
STEP: Saw pod success
Jul  6 19:32:03.816: INFO: Pod "pod-subpath-test-preprovisionedpv-2t5m" satisfied condition "Succeeded or Failed"
Jul  6 19:32:03.847: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-2t5m container test-container-subpath-preprovisionedpv-2t5m: <nil>
STEP: delete the pod
Jul  6 19:32:03.921: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2t5m to disappear
Jul  6 19:32:03.951: INFO: Pod pod-subpath-test-preprovisionedpv-2t5m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2t5m
Jul  6 19:32:03.952: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2t5m" in namespace "provisioning-9814"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":56,"skipped":522,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:04.494: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 213 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":57,"skipped":525,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Jul  6 19:32:11.030: INFO: PersistentVolumeClaim pvc-dkrlf found but phase is Pending instead of Bound.
Jul  6 19:32:13.062: INFO: PersistentVolumeClaim pvc-dkrlf found and phase=Bound (8.158676866s)
Jul  6 19:32:13.062: INFO: Waiting up to 3m0s for PersistentVolume local-bxf6j to have phase Bound
Jul  6 19:32:13.093: INFO: PersistentVolume local-bxf6j found and phase=Bound (30.475842ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z84z
STEP: Creating a pod to test subpath
Jul  6 19:32:13.187: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z84z" in namespace "provisioning-7558" to be "Succeeded or Failed"
Jul  6 19:32:13.217: INFO: Pod "pod-subpath-test-preprovisionedpv-z84z": Phase="Pending", Reason="", readiness=false. Elapsed: 30.532337ms
Jul  6 19:32:15.254: INFO: Pod "pod-subpath-test-preprovisionedpv-z84z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067193907s
Jul  6 19:32:17.286: INFO: Pod "pod-subpath-test-preprovisionedpv-z84z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099313563s
STEP: Saw pod success
Jul  6 19:32:17.286: INFO: Pod "pod-subpath-test-preprovisionedpv-z84z" satisfied condition "Succeeded or Failed"
Jul  6 19:32:17.318: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-z84z container test-container-subpath-preprovisionedpv-z84z: <nil>
STEP: delete the pod
Jul  6 19:32:17.386: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z84z to disappear
Jul  6 19:32:17.418: INFO: Pod pod-subpath-test-preprovisionedpv-z84z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z84z
Jul  6 19:32:17.418: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z84z" in namespace "provisioning-7558"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":38,"skipped":295,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:17.967: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":23,"skipped":191,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:32:06.793: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Jul  6 19:32:11.802: INFO: PersistentVolumeClaim pvc-mgs4h found but phase is Pending instead of Bound.
Jul  6 19:32:13.834: INFO: PersistentVolumeClaim pvc-mgs4h found and phase=Bound (4.103496804s)
Jul  6 19:32:13.834: INFO: Waiting up to 3m0s for PersistentVolume local-prnfq to have phase Bound
Jul  6 19:32:13.866: INFO: PersistentVolume local-prnfq found and phase=Bound (32.162756ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h47s
STEP: Creating a pod to test subpath
Jul  6 19:32:13.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h47s" in namespace "provisioning-1691" to be "Succeeded or Failed"
Jul  6 19:32:13.995: INFO: Pod "pod-subpath-test-preprovisionedpv-h47s": Phase="Pending", Reason="", readiness=false. Elapsed: 32.058214ms
Jul  6 19:32:16.028: INFO: Pod "pod-subpath-test-preprovisionedpv-h47s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065639572s
Jul  6 19:32:18.061: INFO: Pod "pod-subpath-test-preprovisionedpv-h47s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098315011s
STEP: Saw pod success
Jul  6 19:32:18.061: INFO: Pod "pod-subpath-test-preprovisionedpv-h47s" satisfied condition "Succeeded or Failed"
Jul  6 19:32:18.093: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-h47s container test-container-subpath-preprovisionedpv-h47s: <nil>
STEP: delete the pod
Jul  6 19:32:18.162: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h47s to disappear
Jul  6 19:32:18.194: INFO: Pod pod-subpath-test-preprovisionedpv-h47s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h47s
Jul  6 19:32:18.194: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h47s" in namespace "provisioning-1691"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":24,"skipped":191,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 107 lines ...
• [SLOW TEST:7.235 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":531,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:22.488: INFO: Only supported for providers [azure] (not aws)
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":21,"skipped":189,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:32:22.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  6 19:32:22.734: INFO: Waiting up to 5m0s for pod "pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30" in namespace "emptydir-6061" to be "Succeeded or Failed"
Jul  6 19:32:22.765: INFO: Pod "pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30": Phase="Pending", Reason="", readiness=false. Elapsed: 30.664883ms
Jul  6 19:32:24.797: INFO: Pod "pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062759308s
STEP: Saw pod success
Jul  6 19:32:24.797: INFO: Pod "pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30" satisfied condition "Succeeded or Failed"
Jul  6 19:32:24.828: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30 container test-container: <nil>
STEP: delete the pod
Jul  6 19:32:24.897: INFO: Waiting for pod pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30 to disappear
Jul  6 19:32:24.928: INFO: Pod pod-e8f5f347-a91e-49f4-8baf-ca7f7e7e3c30 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:24.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6061" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":549,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:25.009: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 84 lines ...
Jul  6 19:32:25.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  6 19:32:25.215: INFO: Waiting up to 5m0s for pod "pod-ab767372-dca1-4bac-9e0c-af4649ccfe58" in namespace "emptydir-4893" to be "Succeeded or Failed"
Jul  6 19:32:25.246: INFO: Pod "pod-ab767372-dca1-4bac-9e0c-af4649ccfe58": Phase="Pending", Reason="", readiness=false. Elapsed: 30.456101ms
Jul  6 19:32:27.277: INFO: Pod "pod-ab767372-dca1-4bac-9e0c-af4649ccfe58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061823758s
Jul  6 19:32:29.309: INFO: Pod "pod-ab767372-dca1-4bac-9e0c-af4649ccfe58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094122652s
STEP: Saw pod success
Jul  6 19:32:29.309: INFO: Pod "pod-ab767372-dca1-4bac-9e0c-af4649ccfe58" satisfied condition "Succeeded or Failed"
Jul  6 19:32:29.340: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-ab767372-dca1-4bac-9e0c-af4649ccfe58 container test-container: <nil>
STEP: delete the pod
Jul  6 19:32:29.410: INFO: Waiting for pod pod-ab767372-dca1-4bac-9e0c-af4649ccfe58 to disappear
Jul  6 19:32:29.441: INFO: Pod pod-ab767372-dca1-4bac-9e0c-af4649ccfe58 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:29.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4893" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":555,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:29.540: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 63 lines ...
• [SLOW TEST:5.041 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":203,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:30.795: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 142 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":27,"skipped":221,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:33.611: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 206 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":22,"skipped":219,"failed":2,"failures":["[sig-network] DNS should provide DNS for services  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:34.287: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Jul  6 19:32:27.550: INFO: PersistentVolumeClaim pvc-srqmt found but phase is Pending instead of Bound.
Jul  6 19:32:29.581: INFO: PersistentVolumeClaim pvc-srqmt found and phase=Bound (8.154950562s)
Jul  6 19:32:29.581: INFO: Waiting up to 3m0s for PersistentVolume local-q8sbz to have phase Bound
Jul  6 19:32:29.616: INFO: PersistentVolume local-q8sbz found and phase=Bound (35.511821ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7g2v
STEP: Creating a pod to test subpath
Jul  6 19:32:29.714: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7g2v" in namespace "provisioning-5993" to be "Succeeded or Failed"
Jul  6 19:32:29.745: INFO: Pod "pod-subpath-test-preprovisionedpv-7g2v": Phase="Pending", Reason="", readiness=false. Elapsed: 30.496318ms
Jul  6 19:32:31.776: INFO: Pod "pod-subpath-test-preprovisionedpv-7g2v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061690016s
Jul  6 19:32:33.808: INFO: Pod "pod-subpath-test-preprovisionedpv-7g2v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093985728s
STEP: Saw pod success
Jul  6 19:32:33.808: INFO: Pod "pod-subpath-test-preprovisionedpv-7g2v" satisfied condition "Succeeded or Failed"
Jul  6 19:32:33.839: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-7g2v container test-container-subpath-preprovisionedpv-7g2v: <nil>
STEP: delete the pod
Jul  6 19:32:33.908: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7g2v to disappear
Jul  6 19:32:33.939: INFO: Pod pod-subpath-test-preprovisionedpv-7g2v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7g2v
Jul  6 19:32:33.939: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7g2v" in namespace "provisioning-5993"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":39,"skipped":304,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 10 lines ...
Jul  6 19:30:41.990: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-129662kmh      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1296    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-129662kmh,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1296    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-129662kmh,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1296    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-129662kmh,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-129662kmh    a7379cb1-7a0e-43cc-9aaa-becb5fb43350 41599 0 2021-07-06 19:30:42 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-07-06 19:30:42 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-ss4qp pvc- provisioning-1296  e56c4c9e-5f49-4233-b185-624c02e12809 41600 0 2021-07-06 19:30:42 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-07-06 19:30:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-129662kmh,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-ab7317ef-ffaa-4780-a78d-00abe4ed883e in namespace provisioning-1296
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Jul  6 19:32:14.495: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-dfp8w" in namespace "provisioning-1296" to be "Succeeded or Failed"
Jul  6 19:32:14.526: INFO: Pod "pvc-volume-tester-writer-dfp8w": Phase="Pending", Reason="", readiness=false. Elapsed: 31.071873ms
Jul  6 19:32:16.557: INFO: Pod "pvc-volume-tester-writer-dfp8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062261687s
STEP: Saw pod success
Jul  6 19:32:16.558: INFO: Pod "pvc-volume-tester-writer-dfp8w" satisfied condition "Succeeded or Failed"
Jul  6 19:32:16.625: INFO: Pod pvc-volume-tester-writer-dfp8w has the following logs: 
Jul  6 19:32:16.625: INFO: Deleting pod "pvc-volume-tester-writer-dfp8w" in namespace "provisioning-1296"
Jul  6 19:32:16.659: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-dfp8w" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-51-240.ca-central-1.compute.internal"
Jul  6 19:32:16.784: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-72g4v" in namespace "provisioning-1296" to be "Succeeded or Failed"
Jul  6 19:32:16.815: INFO: Pod "pvc-volume-tester-reader-72g4v": Phase="Pending", Reason="", readiness=false. Elapsed: 30.90599ms
Jul  6 19:32:18.846: INFO: Pod "pvc-volume-tester-reader-72g4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062539647s
Jul  6 19:32:20.878: INFO: Pod "pvc-volume-tester-reader-72g4v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093890082s
Jul  6 19:32:22.910: INFO: Pod "pvc-volume-tester-reader-72g4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126286132s
STEP: Saw pod success
Jul  6 19:32:22.910: INFO: Pod "pvc-volume-tester-reader-72g4v" satisfied condition "Succeeded or Failed"
Jul  6 19:32:22.984: INFO: Pod pvc-volume-tester-reader-72g4v has the following logs: hello world

Jul  6 19:32:22.984: INFO: Deleting pod "pvc-volume-tester-reader-72g4v" in namespace "provisioning-1296"
Jul  6 19:32:23.023: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-72g4v" to be fully deleted
Jul  6 19:32:23.059: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ss4qp] to have phase Bound
Jul  6 19:32:23.092: INFO: PersistentVolumeClaim pvc-ss4qp found and phase=Bound (32.421821ms)
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":41,"skipped":279,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Jul  6 19:31:38.951: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7229k46bf
STEP: creating a claim
Jul  6 19:31:38.983: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-tcmm
STEP: Creating a pod to test subpath
Jul  6 19:31:39.083: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tcmm" in namespace "provisioning-7229" to be "Succeeded or Failed"
Jul  6 19:31:39.114: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 31.515993ms
Jul  6 19:31:41.146: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063321708s
Jul  6 19:31:43.179: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096137927s
Jul  6 19:31:45.212: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128955934s
Jul  6 19:31:47.243: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160751432s
Jul  6 19:31:49.276: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193471587s
... skipping 12 lines ...
Jul  6 19:32:15.707: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 36.624300223s
Jul  6 19:32:17.740: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 38.657097801s
Jul  6 19:32:19.772: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 40.689665726s
Jul  6 19:32:21.807: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Pending", Reason="", readiness=false. Elapsed: 42.72385775s
Jul  6 19:32:23.840: INFO: Pod "pod-subpath-test-dynamicpv-tcmm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.757004492s
STEP: Saw pod success
Jul  6 19:32:23.840: INFO: Pod "pod-subpath-test-dynamicpv-tcmm" satisfied condition "Succeeded or Failed"
Jul  6 19:32:23.872: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-tcmm container test-container-volume-dynamicpv-tcmm: <nil>
STEP: delete the pod
Jul  6 19:32:23.941: INFO: Waiting for pod pod-subpath-test-dynamicpv-tcmm to disappear
Jul  6 19:32:23.972: INFO: Pod pod-subpath-test-dynamicpv-tcmm no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tcmm
Jul  6 19:32:23.972: INFO: Deleting pod "pod-subpath-test-dynamicpv-tcmm" in namespace "provisioning-7229"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":31,"skipped":232,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:8.001 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":40,"skipped":307,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":28,"skipped":241,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:45.718: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 73 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-0a31273b-51cf-46ef-abce-49c47c50c8be
STEP: Creating a pod to test consume configMaps
Jul  6 19:32:43.571: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc" in namespace "configmap-7304" to be "Succeeded or Failed"
Jul  6 19:32:43.603: INFO: Pod "pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.130972ms
Jul  6 19:32:45.635: INFO: Pod "pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063985285s
Jul  6 19:32:47.667: INFO: Pod "pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095425067s
STEP: Saw pod success
Jul  6 19:32:47.667: INFO: Pod "pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc" satisfied condition "Succeeded or Failed"
Jul  6 19:32:47.697: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:32:47.773: INFO: Waiting for pod pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc to disappear
Jul  6 19:32:47.804: INFO: Pod pod-configmaps-ecc67c24-2c6b-45bb-a424-161664c004fc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:47.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7304" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":41,"skipped":309,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:47.883: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  6 19:32:24.700: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:32:24.733: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-bfg8
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 19:32:24.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-bfg8" in namespace "provisioning-5504" to be "Succeeded or Failed"
Jul  6 19:32:24.806: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.538821ms
Jul  6 19:32:26.838: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063170848s
Jul  6 19:32:28.871: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 4.095993046s
Jul  6 19:32:30.902: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 6.127565913s
Jul  6 19:32:32.935: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.160592143s
Jul  6 19:32:34.967: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.192715284s
... skipping 2 lines ...
Jul  6 19:32:41.066: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.291446333s
Jul  6 19:32:43.098: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.323923491s
Jul  6 19:32:45.134: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 20.35975556s
Jul  6 19:32:47.166: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.391087375s
Jul  6 19:32:49.198: INFO: Pod "pod-subpath-test-inlinevolume-bfg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.423793231s
STEP: Saw pod success
Jul  6 19:32:49.198: INFO: Pod "pod-subpath-test-inlinevolume-bfg8" satisfied condition "Succeeded or Failed"
Jul  6 19:32:49.230: INFO: Trying to get logs from node ip-172-20-56-177.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-bfg8 container test-container-subpath-inlinevolume-bfg8: <nil>
STEP: delete the pod
Jul  6 19:32:49.300: INFO: Waiting for pod pod-subpath-test-inlinevolume-bfg8 to disappear
Jul  6 19:32:49.331: INFO: Pod pod-subpath-test-inlinevolume-bfg8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-bfg8
Jul  6 19:32:49.331: INFO: Deleting pod "pod-subpath-test-inlinevolume-bfg8" in namespace "provisioning-5504"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":22,"skipped":190,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:49.475: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 79 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 25 lines ...
Jul  6 19:32:41.620: INFO: PersistentVolumeClaim pvc-q9sxr found but phase is Pending instead of Bound.
Jul  6 19:32:43.652: INFO: PersistentVolumeClaim pvc-q9sxr found and phase=Bound (10.195258623s)
Jul  6 19:32:43.652: INFO: Waiting up to 3m0s for PersistentVolume local-xs29v to have phase Bound
Jul  6 19:32:43.683: INFO: PersistentVolume local-xs29v found and phase=Bound (31.310519ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6psj
STEP: Creating a pod to test subpath
Jul  6 19:32:43.778: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6psj" in namespace "provisioning-8764" to be "Succeeded or Failed"
Jul  6 19:32:43.810: INFO: Pod "pod-subpath-test-preprovisionedpv-6psj": Phase="Pending", Reason="", readiness=false. Elapsed: 31.365247ms
Jul  6 19:32:45.842: INFO: Pod "pod-subpath-test-preprovisionedpv-6psj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064231901s
Jul  6 19:32:47.875: INFO: Pod "pod-subpath-test-preprovisionedpv-6psj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096257485s
Jul  6 19:32:49.907: INFO: Pod "pod-subpath-test-preprovisionedpv-6psj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128562786s
STEP: Saw pod success
Jul  6 19:32:49.907: INFO: Pod "pod-subpath-test-preprovisionedpv-6psj" satisfied condition "Succeeded or Failed"
Jul  6 19:32:49.938: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-6psj container test-container-subpath-preprovisionedpv-6psj: <nil>
STEP: delete the pod
Jul  6 19:32:50.015: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6psj to disappear
Jul  6 19:32:50.047: INFO: Pod pod-subpath-test-preprovisionedpv-6psj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6psj
Jul  6 19:32:50.047: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6psj" in namespace "provisioning-8764"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":26,"skipped":210,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:50.839: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 30 lines ...
Jul  6 19:31:14.210: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1740jf2mr
STEP: creating a claim
Jul  6 19:31:14.242: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-kgkj
STEP: Creating a pod to test subpath
Jul  6 19:31:14.337: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kgkj" in namespace "provisioning-1740" to be "Succeeded or Failed"
Jul  6 19:31:14.368: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.790886ms
Jul  6 19:31:16.400: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062585274s
Jul  6 19:31:18.431: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093865933s
Jul  6 19:31:20.463: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126172256s
Jul  6 19:31:22.496: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158647172s
Jul  6 19:31:24.527: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189796943s
... skipping 33 lines ...
Jul  6 19:32:33.617: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.279938551s
Jul  6 19:32:35.649: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.311614156s
Jul  6 19:32:37.681: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.343762001s
Jul  6 19:32:39.713: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.375469486s
Jul  6 19:32:41.745: INFO: Pod "pod-subpath-test-dynamicpv-kgkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m27.407582564s
STEP: Saw pod success
Jul  6 19:32:41.745: INFO: Pod "pod-subpath-test-dynamicpv-kgkj" satisfied condition "Succeeded or Failed"
Jul  6 19:32:41.776: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-kgkj container test-container-subpath-dynamicpv-kgkj: <nil>
STEP: delete the pod
Jul  6 19:32:41.853: INFO: Waiting for pod pod-subpath-test-dynamicpv-kgkj to disappear
Jul  6 19:32:41.884: INFO: Pod pod-subpath-test-dynamicpv-kgkj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kgkj
Jul  6 19:32:41.884: INFO: Deleting pod "pod-subpath-test-dynamicpv-kgkj" in namespace "provisioning-1740"
... skipping 28 lines ...
Jul  6 19:32:47.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  6 19:32:48.112: INFO: Waiting up to 5m0s for pod "downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372" in namespace "downward-api-1542" to be "Succeeded or Failed"
Jul  6 19:32:48.143: INFO: Pod "downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372": Phase="Pending", Reason="", readiness=false. Elapsed: 30.509982ms
Jul  6 19:32:50.174: INFO: Pod "downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061798002s
Jul  6 19:32:52.205: INFO: Pod "downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092551619s
STEP: Saw pod success
Jul  6 19:32:52.205: INFO: Pod "downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372" satisfied condition "Succeeded or Failed"
Jul  6 19:32:52.236: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372 container dapi-container: <nil>
STEP: delete the pod
Jul  6 19:32:52.305: INFO: Waiting for pod downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372 to disappear
Jul  6 19:32:52.336: INFO: Pod downward-api-200f532c-02c4-4cf7-a024-7aebd9dd9372 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:52.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1542" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":313,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:52.408: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-73fd8139-f1be-48b6-8397-d9e8c0ae349e
STEP: Creating a pod to test consume configMaps
Jul  6 19:32:51.070: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57" in namespace "projected-7354" to be "Succeeded or Failed"
Jul  6 19:32:51.101: INFO: Pod "pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57": Phase="Pending", Reason="", readiness=false. Elapsed: 31.206864ms
Jul  6 19:32:53.133: INFO: Pod "pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062997742s
STEP: Saw pod success
Jul  6 19:32:53.133: INFO: Pod "pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57" satisfied condition "Succeeded or Failed"
Jul  6 19:32:53.167: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 19:32:53.236: INFO: Waiting for pod pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57 to disappear
Jul  6 19:32:53.268: INFO: Pod pod-projected-configmaps-9f65be76-6f34-4eda-87a5-916692c89e57 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7354" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":213,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:53.361: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 18 lines ...
Jul  6 19:14:07.618: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:14:37.650: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:15:07.681: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:15:37.714: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:16:07.746: INFO: Unable to read jessie_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:16:37.781: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:16:37.781: INFO: Lookups using dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:17:12.816: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:17:42.848: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:18:12.879: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:18:42.912: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:19:12.944: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:19:42.975: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:20:13.007: INFO: Unable to read jessie_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:20:43.038: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:20:43.038: INFO: Lookups using dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:21:17.815: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:21:47.848: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:22:17.881: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:22:47.913: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:23:17.945: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:23:47.976: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:24:18.008: INFO: Unable to read jessie_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:24:48.040: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:24:48.040: INFO: Lookups using dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:25:22.813: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:25:52.846: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:26:22.878: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:26:52.911: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:27:22.948: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:27:52.984: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:28:23.030: INFO: Unable to read jessie_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:28:53.062: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:28:53.062: INFO: Lookups using dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:29:23.095: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:29:53.127: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:30:23.160: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:30:53.192: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:31:23.227: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:31:53.259: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:32:23.291: INFO: Unable to read jessie_udp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:32:53.324: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635: the server is currently unable to handle the request (get pods dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635)
Jul  6 19:32:53.324: INFO: Lookups using dns-4860/dns-test-f2fbcc47-f8b3-4569-9e8d-049b618c3635 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:32:53.324: FAIL: Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 274 lines ...
• Failure [1222.057 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:32:53.324: Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":7,"skipped":60,"failed":3,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:55.301: INFO: Only supported for providers [openstack] (not aws)
... skipping 44 lines ...
Jul  6 19:32:52.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  6 19:32:52.606: INFO: Waiting up to 5m0s for pod "pod-5d1c6040-fecb-419a-9735-55bbe99efc40" in namespace "emptydir-4229" to be "Succeeded or Failed"
Jul  6 19:32:52.636: INFO: Pod "pod-5d1c6040-fecb-419a-9735-55bbe99efc40": Phase="Pending", Reason="", readiness=false. Elapsed: 30.484634ms
Jul  6 19:32:54.668: INFO: Pod "pod-5d1c6040-fecb-419a-9735-55bbe99efc40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061522198s
Jul  6 19:32:56.699: INFO: Pod "pod-5d1c6040-fecb-419a-9735-55bbe99efc40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092644922s
STEP: Saw pod success
Jul  6 19:32:56.699: INFO: Pod "pod-5d1c6040-fecb-419a-9735-55bbe99efc40" satisfied condition "Succeeded or Failed"
Jul  6 19:32:56.729: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod pod-5d1c6040-fecb-419a-9735-55bbe99efc40 container test-container: <nil>
STEP: delete the pod
Jul  6 19:32:56.798: INFO: Waiting for pod pod-5d1c6040-fecb-419a-9735-55bbe99efc40 to disappear
Jul  6 19:32:56.829: INFO: Pod pod-5d1c6040-fecb-419a-9735-55bbe99efc40 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:56.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4229" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":314,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:56.903: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
Jul  6 19:32:41.725: INFO: Waiting for pod aws-client to disappear
Jul  6 19:32:41.757: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jul  6 19:32:41.757: INFO: Deleting PersistentVolumeClaim "pvc-dgjn9"
Jul  6 19:32:41.790: INFO: Deleting PersistentVolume "aws-p75km"
Jul  6 19:32:42.129: INFO: Couldn't delete PD "aws://ca-central-1a/vol-01acecdbb3fe1151b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01acecdbb3fe1151b is currently attached to i-005381dcd763db5a9
	status code: 400, request id: 1f33c43f-bb29-4e2d-bc09-decf6c43f66f
Jul  6 19:32:47.398: INFO: Couldn't delete PD "aws://ca-central-1a/vol-01acecdbb3fe1151b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01acecdbb3fe1151b is currently attached to i-005381dcd763db5a9
	status code: 400, request id: e3bedc67-7909-43db-834b-620ccf2f5012
Jul  6 19:32:52.652: INFO: Couldn't delete PD "aws://ca-central-1a/vol-01acecdbb3fe1151b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01acecdbb3fe1151b is currently attached to i-005381dcd763db5a9
	status code: 400, request id: 91fd8a86-ddb3-4be7-9b33-ebc23043a5b5
Jul  6 19:32:57.901: INFO: Successfully deleted PD "aws://ca-central-1a/vol-01acecdbb3fe1151b".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:32:57.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3193" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":27,"skipped":228,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:32:57.999: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 237 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Jul  6 19:33:00.667: INFO: Waiting up to 5m0s for pod "metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b" in namespace "projected-3389" to be "Succeeded or Failed"
Jul  6 19:33:00.698: INFO: Pod "metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.444758ms
Jul  6 19:33:02.729: INFO: Pod "metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061892458s
STEP: Saw pod success
Jul  6 19:33:02.729: INFO: Pod "metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b" satisfied condition "Succeeded or Failed"
Jul  6 19:33:02.760: INFO: Trying to get logs from node ip-172-20-51-240.ca-central-1.compute.internal pod metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b container client-container: <nil>
STEP: delete the pod
Jul  6 19:33:02.830: INFO: Waiting for pod metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b to disappear
Jul  6 19:33:02.861: INFO: Pod metadata-volume-ffe2569e-cfc3-484a-85cd-db4c43409d3b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:02.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3389" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":44,"skipped":331,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":29,"skipped":195,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:03.049: INFO: Only supported for providers [openstack] (not aws)
... skipping 28 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Jul  6 19:32:58.225: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:32:58.257: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-w7cx
STEP: Creating a pod to test subpath
Jul  6 19:32:58.293: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-w7cx" in namespace "provisioning-8112" to be "Succeeded or Failed"
Jul  6 19:32:58.324: INFO: Pod "pod-subpath-test-inlinevolume-w7cx": Phase="Pending", Reason="", readiness=false. Elapsed: 31.455755ms
Jul  6 19:33:00.357: INFO: Pod "pod-subpath-test-inlinevolume-w7cx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063844287s
Jul  6 19:33:02.389: INFO: Pod "pod-subpath-test-inlinevolume-w7cx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096629739s
Jul  6 19:33:04.425: INFO: Pod "pod-subpath-test-inlinevolume-w7cx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131938041s
STEP: Saw pod success
Jul  6 19:33:04.425: INFO: Pod "pod-subpath-test-inlinevolume-w7cx" satisfied condition "Succeeded or Failed"
Jul  6 19:33:04.456: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-w7cx container test-container-subpath-inlinevolume-w7cx: <nil>
STEP: delete the pod
Jul  6 19:33:04.524: INFO: Waiting for pod pod-subpath-test-inlinevolume-w7cx to disappear
Jul  6 19:33:04.555: INFO: Pod pod-subpath-test-inlinevolume-w7cx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-w7cx
Jul  6 19:33:04.555: INFO: Deleting pod "pod-subpath-test-inlinevolume-w7cx" in namespace "provisioning-8112"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":28,"skipped":240,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Jul  6 19:32:56.074: INFO: PersistentVolumeClaim pvc-kh7mh found but phase is Pending instead of Bound.
Jul  6 19:32:58.131: INFO: PersistentVolumeClaim pvc-kh7mh found and phase=Bound (2.088887982s)
Jul  6 19:32:58.131: INFO: Waiting up to 3m0s for PersistentVolume local-5th9x to have phase Bound
Jul  6 19:32:58.163: INFO: PersistentVolume local-5th9x found and phase=Bound (31.538563ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xhc8
STEP: Creating a pod to test subpath
Jul  6 19:32:58.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xhc8" in namespace "provisioning-5672" to be "Succeeded or Failed"
Jul  6 19:32:58.294: INFO: Pod "pod-subpath-test-preprovisionedpv-xhc8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.138671ms
Jul  6 19:33:00.326: INFO: Pod "pod-subpath-test-preprovisionedpv-xhc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065879681s
Jul  6 19:33:02.359: INFO: Pod "pod-subpath-test-preprovisionedpv-xhc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098974478s
Jul  6 19:33:04.391: INFO: Pod "pod-subpath-test-preprovisionedpv-xhc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130983101s
Jul  6 19:33:06.424: INFO: Pod "pod-subpath-test-preprovisionedpv-xhc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.163176422s
STEP: Saw pod success
Jul  6 19:33:06.424: INFO: Pod "pod-subpath-test-preprovisionedpv-xhc8" satisfied condition "Succeeded or Failed"
Jul  6 19:33:06.456: INFO: Trying to get logs from node ip-172-20-61-241.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xhc8 container test-container-subpath-preprovisionedpv-xhc8: <nil>
STEP: delete the pod
Jul  6 19:33:06.527: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xhc8 to disappear
Jul  6 19:33:06.558: INFO: Pod pod-subpath-test-preprovisionedpv-xhc8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xhc8
Jul  6 19:33:06.558: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xhc8" in namespace "provisioning-5672"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":28,"skipped":219,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:33:07.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  6 19:33:07.323: INFO: Waiting up to 5m0s for pod "pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5" in namespace "emptydir-6133" to be "Succeeded or Failed"
Jul  6 19:33:07.355: INFO: Pod "pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.831206ms
Jul  6 19:33:09.388: INFO: Pod "pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06421665s
STEP: Saw pod success
Jul  6 19:33:09.388: INFO: Pod "pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5" satisfied condition "Succeeded or Failed"
Jul  6 19:33:09.419: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5 container test-container: <nil>
STEP: delete the pod
Jul  6 19:33:09.490: INFO: Waiting for pod pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5 to disappear
Jul  6 19:33:09.521: INFO: Pod pod-88dd89c5-9d4e-4e6c-994f-3fb6ef76efc5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:09.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6133" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":223,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:09.607: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":45,"skipped":337,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:11.206: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 191 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":61,"skipped":566,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":57,"skipped":500,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:32:52.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
Jul  6 19:32:52.396: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 19:32:52.462: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8553" in namespace "provisioning-8553" to be "Succeeded or Failed"
Jul  6 19:32:52.493: INFO: Pod "hostpath-symlink-prep-provisioning-8553": Phase="Pending", Reason="", readiness=false. Elapsed: 30.630454ms
Jul  6 19:32:54.525: INFO: Pod "hostpath-symlink-prep-provisioning-8553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062673089s
STEP: Saw pod success
Jul  6 19:32:54.525: INFO: Pod "hostpath-symlink-prep-provisioning-8553" satisfied condition "Succeeded or Failed"
Jul  6 19:32:54.525: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8553" in namespace "provisioning-8553"
Jul  6 19:32:54.561: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8553" to be fully deleted
Jul  6 19:32:54.595: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-th95
Jul  6 19:32:56.689: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-8553 exec pod-subpath-test-inlinevolume-th95 --container test-container-volume-inlinevolume-th95 -- /bin/sh -c rm -r /test-volume/provisioning-8553'
Jul  6 19:32:57.210: INFO: stderr: ""
Jul  6 19:32:57.210: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-th95
Jul  6 19:32:57.210: INFO: Deleting pod "pod-subpath-test-inlinevolume-th95" in namespace "provisioning-8553"
Jul  6 19:32:57.280: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-th95" to be fully deleted
STEP: Deleting pod
Jul  6 19:33:11.344: INFO: Deleting pod "pod-subpath-test-inlinevolume-th95" in namespace "provisioning-8553"
Jul  6 19:33:11.407: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8553" in namespace "provisioning-8553" to be "Succeeded or Failed"
Jul  6 19:33:11.438: INFO: Pod "hostpath-symlink-prep-provisioning-8553": Phase="Pending", Reason="", readiness=false. Elapsed: 31.557358ms
Jul  6 19:33:13.470: INFO: Pod "hostpath-symlink-prep-provisioning-8553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063650146s
STEP: Saw pod success
Jul  6 19:33:13.470: INFO: Pod "hostpath-symlink-prep-provisioning-8553" satisfied condition "Succeeded or Failed"
Jul  6 19:33:13.470: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8553" in namespace "provisioning-8553"
Jul  6 19:33:13.505: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8553" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:13.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8553" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":58,"skipped":500,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:13.621: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:14.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3720" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":59,"skipped":528,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:14.960: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:15.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-6205" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":60,"skipped":533,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:15.258: INFO: Only supported for providers [azure] (not aws)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:15.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3610" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":61,"skipped":542,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":23,"skipped":208,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:17.662: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
Jul  6 19:33:17.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  6 19:33:17.875: INFO: Waiting up to 5m0s for pod "pod-aee0eebf-3e9f-4d50-b874-e044b58917f4" in namespace "emptydir-6330" to be "Succeeded or Failed"
Jul  6 19:33:17.907: INFO: Pod "pod-aee0eebf-3e9f-4d50-b874-e044b58917f4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.277112ms
Jul  6 19:33:19.938: INFO: Pod "pod-aee0eebf-3e9f-4d50-b874-e044b58917f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06270877s
STEP: Saw pod success
Jul  6 19:33:19.938: INFO: Pod "pod-aee0eebf-3e9f-4d50-b874-e044b58917f4" satisfied condition "Succeeded or Failed"
Jul  6 19:33:19.970: INFO: Trying to get logs from node ip-172-20-61-17.ca-central-1.compute.internal pod pod-aee0eebf-3e9f-4d50-b874-e044b58917f4 container test-container: <nil>
STEP: delete the pod
Jul  6 19:33:20.048: INFO: Waiting for pod pod-aee0eebf-3e9f-4d50-b874-e044b58917f4 to disappear
Jul  6 19:33:20.083: INFO: Pod pod-aee0eebf-3e9f-4d50-b874-e044b58917f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":29,"skipped":265,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:22.158: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:23.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2365" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":30,"skipped":272,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 19:33:23.101: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 19:33:23.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7544" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":31,"skipped":277,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}
Jul  6 19:33:23.647: INFO: Running AfterSuite actions on all nodes
Jul  6 19:33:23.647: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:33:23.647: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:33:23.647: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:33:23.647: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:33:23.647: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 26 lines ...
• [SLOW TEST:17.476 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":62,"skipped":544,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}
Jul  6 19:33:33.191: INFO: Running AfterSuite actions on all nodes
Jul  6 19:33:33.191: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:33:33.191: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:33:33.191: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:33:33.191: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:33:33.191: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":45,"skipped":287,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
Jul  6 19:33:35.301: INFO: Running AfterSuite actions on all nodes
Jul  6 19:33:35.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:33:35.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:33:35.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:33:35.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:33:35.301: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 175 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":22,"skipped":251,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:31:35.447: INFO: >>> kubeConfig: /root/.kube/config
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":23,"skipped":251,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
Jul  6 19:33:55.312: INFO: Running AfterSuite actions on all nodes
Jul  6 19:33:55.312: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:33:55.312: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:33:55.312: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:33:55.312: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:33:55.312: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 22 lines ...
• [SLOW TEST:50.350 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":62,"skipped":568,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
Jul  6 19:34:01.992: INFO: Running AfterSuite actions on all nodes
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 19:34:01.992: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":42,"skipped":289,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:29:17.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 233 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should provide basic identity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:128
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":43,"skipped":289,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}
Jul  6 19:34:03.081: INFO: Running AfterSuite actions on all nodes
Jul  6 19:34:03.081: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:34:03.081: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:34:03.081: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:34:03.081: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:34:03.081: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 7 lines ...
Jul  6 19:32:34.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  6 19:32:34.518: INFO: created pod
Jul  6 19:32:34.518: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-9992" to be "Succeeded or Failed"
Jul  6 19:32:34.550: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 31.37299ms
Jul  6 19:32:36.581: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.062803166s
Jul  6 19:32:38.613: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 4.094361004s
Jul  6 19:32:40.644: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 6.125141277s
Jul  6 19:32:42.677: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 8.158286018s
Jul  6 19:32:44.708: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 10.189492034s
... skipping 19 lines ...
Jul  6 19:33:25.340: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 50.821041315s
Jul  6 19:33:27.371: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 52.852067017s
Jul  6 19:33:29.403: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 54.884069401s
Jul  6 19:33:31.434: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 56.915949948s
Jul  6 19:33:33.466: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 58.947176019s
Jul  6 19:33:35.497: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.978309541s
Jul  6 19:33:37.528: INFO: Pod "oidc-discovery-validator": Phase="Failed", Reason="", readiness=false. Elapsed: 1m3.009856143s
Jul  6 19:34:07.532: INFO: polling logs
Jul  6 19:34:07.576: INFO: Pod logs: 
2021/07/06 19:32:35 OK: Got token
2021/07/06 19:32:35 validating with in-cluster discovery
2021/07/06 19:32:35 OK: got issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/06 19:32:35 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery", Subject:"system:serviceaccount:svcaccounts-9992:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1625600554, NotBefore:1625599954, IssuedAt:1625599954, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-9992", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"297b2dff-6948-4c85-bd71-48ccd521197d"}}}
2021/07/06 19:33:05 failed to validate with in-cluster discovery: Get "https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery/.well-known/openid-configuration": dial tcp: i/o timeout
2021/07/06 19:33:05 falling back to validating with external discovery
2021/07/06 19:33:05 OK: got issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/06 19:33:05 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery", Subject:"system:serviceaccount:svcaccounts-9992:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1625600554, NotBefore:1625599954, IssuedAt:1625599954, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-9992", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"297b2dff-6948-4c85-bd71-48ccd521197d"}}}
2021/07/06 19:33:35 Get "https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery/.well-known/openid-configuration": dial tcp: i/o timeout

Jul  6 19:34:07.576: FAIL: Unexpected error:
    <*errors.errorString | 0xc001f97b70>: {
        s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.61.17 PodIP:100.96.4.107 PodIPs:[{IP:100.96.4.107}] StartTime:2021-07-06 19:32:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 19:32:35 +0000 UTC,FinishedAt:2021-07-06 19:33:35 +0000 UTC,ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25 Started:0xc003e19fb5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.61.17 PodIP:100.96.4.107 PodIPs:[{IP:100.96.4.107}] StartTime:2021-07-06 19:32:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 19:32:35 +0000 UTC,FinishedAt:2021-07-06 19:33:35 +0000 UTC,ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25 Started:0xc003e19fb5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func6.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789 +0xc45
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000839380)
... skipping 10 lines ...
STEP: Found 4 events.
Jul  6 19:34:07.639: INFO: At 2021-07-06 19:32:34 +0000 UTC - event for oidc-discovery-validator: {default-scheduler } Scheduled: Successfully assigned svcaccounts-9992/oidc-discovery-validator to ip-172-20-61-17.ca-central-1.compute.internal
Jul  6 19:34:07.639: INFO: At 2021-07-06 19:32:35 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-61-17.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  6 19:34:07.639: INFO: At 2021-07-06 19:32:35 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-61-17.ca-central-1.compute.internal} Created: Created container oidc-discovery-validator
Jul  6 19:34:07.639: INFO: At 2021-07-06 19:32:35 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-61-17.ca-central-1.compute.internal} Started: Started container oidc-discovery-validator
Jul  6 19:34:07.669: INFO: POD                       NODE                                           PHASE   GRACE  CONDITIONS
Jul  6 19:34:07.669: INFO: oidc-discovery-validator  ip-172-20-61-17.ca-central-1.compute.internal  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:32:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:33:36 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:33:36 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:32:34 +0000 UTC  }]
Jul  6 19:34:07.669: INFO: 
Jul  6 19:34:07.701: INFO: 
Logging node info for node ip-172-20-44-51.ca-central-1.compute.internal
Jul  6 19:34:07.731: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-44-51.ca-central-1.compute.internal    63325104-67d6-4441-97d8-03ae4b4af776 44525 0 2021-07-06 19:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ca-central-1 failure-domain.beta.kubernetes.io/zone:ca-central-1a kops.k8s.io/instancegroup:master-ca-central-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-44-51.ca-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ca-central-1a topology.kubernetes.io/region:ca-central-1 topology.kubernetes.io/zone:ca-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0199654295adf8c6d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{protokube Update v1 2021-07-06 19:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2021-07-06 19:02:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {kubelet Update v1 2021-07-06 19:02:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} } {aws-cloud-controller-manager Update v1 2021-07-06 19:02:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2021-07-06 19:02:49 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-06 19:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}} } {kubelet Update v1 2021-07-06 19:02:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ca-central-1a/i-0199654295adf8c6d,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3875147776 0} {<nil>} 3784324Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{3770290176 0} {<nil>} 3681924Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-06 19:32:54 +0000 UTC,LastTransitionTime:2021-07-06 19:02:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-06 19:32:54 +0000 UTC,LastTransitionTime:2021-07-06 19:02:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-06 19:32:54 +0000 UTC,LastTransitionTime:2021-07-06 19:02:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-06 19:32:54 +0000 UTC,LastTransitionTime:2021-07-06 19:02:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.51,},NodeAddress{Type:ExternalIP,Address:35.182.118.89,},NodeAddress{Type:InternalDNS,Address:ip-172-20-44-51.ca-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-44-51.ca-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-182-118-89.ca-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223875553ea40f3fbc9895c36a2bec,SystemUUID:ec223875-553e-a40f-3fbc-9895c36a2bec,BootID:e41c565f-38ae-46ea-a8db-f5a02c244dc2,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430],SizeBytes:171082409,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64@sha256:125a9c5805e1327c0ff2ebf23c71fd9fe2a68203ff118a162e2d04737999db58 k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.22.0-beta.0],SizeBytes:127900125,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.22.0-beta.0],SizeBytes:122248003,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1],SizeBytes:113890838,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1],SizeBytes:112365079,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 
k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.22.0-beta.0],SizeBytes:53004600,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1],SizeBytes:25632279,},ContainerImage{Names:[gcr.io/k8s-staging-provider-aws/cloud-controller-manager@sha256:6e0084ecedc8d6d2b0f5cb984c4fe6c860c8d7283c173145b0eaeaaff35ba98a gcr.io/k8s-staging-provider-aws/cloud-controller-manager:latest],SizeBytes:16211866,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul  6 19:34:07.732: INFO: 
Logging kubelet events for node ip-172-20-44-51.ca-central-1.compute.internal
... skipping 136 lines ...
Jul  6 19:34:08.680: INFO: kube-proxy-ip-172-20-61-17.ca-central-1.compute.internal started at 2021-07-06 19:03:30 +0000 UTC (0+1 container statuses recorded)
Jul  6 19:34:08.680: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  6 19:34:08.680: INFO: verify-service-up-exec-pod-whcnc started at 2021-07-06 19:28:56 +0000 UTC (0+1 container statuses recorded)
Jul  6 19:34:08.680: INFO: 	Container agnhost-container ready: true, restart count 0
Jul  6 19:34:08.680: INFO: concurrent-27093334--1-qw5gh started at 2021-07-06 19:34:00 +0000 UTC (0+1 container statuses recorded)
Jul  6 19:34:08.680: INFO: 	Container c ready: true, restart count 0
Jul  6 19:34:08.680: INFO: failed-jobs-history-limit-27093334--1-mpbv8 started at 2021-07-06 19:34:00 +0000 UTC (0+1 container statuses recorded)
Jul  6 19:34:08.680: INFO: 	Container c ready: false, restart count 1
Jul  6 19:34:08.680: INFO: ebs-csi-node-g8std started at 2021-07-06 19:04:01 +0000 UTC (0+3 container statuses recorded)
Jul  6 19:34:08.680: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  6 19:34:08.680: INFO: 	Container liveness-probe ready: true, restart count 0
Jul  6 19:34:08.680: INFO: 	Container node-driver-registrar ready: true, restart count 0
Jul  6 19:34:08.680: INFO: up-down-1-mt9xq started at 2021-07-06 19:28:48 +0000 UTC (0+1 container statuses recorded)
... skipping 55 lines ...
• Failure [94.982 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:34:07.576: Unexpected error:
      <*errors.errorString | 0xc001f97b70>: {
          s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.61.17 PodIP:100.96.4.107 PodIPs:[{IP:100.96.4.107}] StartTime:2021-07-06 19:32:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 19:32:35 +0000 UTC,FinishedAt:2021-07-06 19:33:35 +0000 UTC,ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25 Started:0xc003e19fb5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
      }
      pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:33:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.61.17 PodIP:100.96.4.107 PodIPs:[{IP:100.96.4.107}] StartTime:2021-07-06 19:32:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 19:32:35 +0000 UTC,FinishedAt:2021-07-06 19:33:35 +0000 UTC,ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://364afe56d8540581fdb649cb0d4fba23390058f16ab610464f22f6a53b2a9a25 Started:0xc003e19fb5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789
------------------------------
{"msg":"FAILED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":22,"skipped":233,"failed":3,"failures":["[sig-network] DNS should provide DNS for services  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}
Jul  6 19:34:09.288: INFO: Running AfterSuite actions on all nodes
Jul  6 19:34:09.288: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:34:09.288: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:34:09.288: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:34:09.288: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:34:09.288: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":30,"skipped":215,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
Jul  6 19:34:11.229: INFO: Running AfterSuite actions on all nodes
Jul  6 19:34:11.229: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:34:11.229: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:34:11.229: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:34:11.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:34:11.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 41 lines ...
Jul  6 19:32:37.237: INFO: Running '/tmp/kubectl3219863616/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1218 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.71.110.245:80 2>&1 || true; echo; done'
Jul  6 19:34:11.863: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget 
-q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n+ wget -q -T 1 -O - http://100.71.110.245:80\n+ echo\n"
Jul  6 19:34:11.863: INFO: stdout: "up-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-mt9xq\nwget: download timed out\n\nup-down-1-mt9xq\nup-down-1-mt9xq\n"
Jul  6 19:34:11.863: INFO: Unable to reach the following endpoints of service 100.71.110.245: map[up-down-1-49smm:{} up-down-1-bvqz6:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1218
STEP: Deleting pod verify-service-up-exec-pod-whcnc in namespace services-1218
Jul  6 19:34:16.936: FAIL: Unexpected error:
    <*errors.errorString | 0xc0042120c0>: {
        s: "service verification failed for: 100.71.110.245\nexpected [up-down-1-49smm up-down-1-bvqz6 up-down-1-mt9xq]\nreceived [up-down-1-mt9xq wget: download timed out]",
    }
    service verification failed for: 100.71.110.245
    expected [up-down-1-49smm up-down-1-bvqz6 up-down-1-mt9xq]
    received [up-down-1-mt9xq wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.8()
... skipping 262 lines ...
• Failure [330.355 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1033

  Jul  6 19:34:16.936: Unexpected error:
      <*errors.errorString | 0xc0042120c0>: {
          s: "service verification failed for: 100.71.110.245\nexpected [up-down-1-49smm up-down-1-bvqz6 up-down-1-mt9xq]\nreceived [up-down-1-mt9xq wget: download timed out]",
      }
      service verification failed for: 100.71.110.245
      expected [up-down-1-49smm up-down-1-bvqz6 up-down-1-mt9xq]
      received [up-down-1-mt9xq wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1049
------------------------------
{"msg":"FAILED [sig-network] Services should be able to up and down services","total":-1,"completed":31,"skipped":194,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should be able to up and down services"]}
Jul  6 19:34:18.694: INFO: Running AfterSuite actions on all nodes
Jul  6 19:34:18.694: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:34:18.694: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:34:18.694: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:34:18.694: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:34:18.694: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":32,"skipped":234,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}
Jul  6 19:34:47.544: INFO: Running AfterSuite actions on all nodes
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 19:34:47.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":214,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:33:20.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-2496" for this suite.


• [SLOW TEST:106.458 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":25,"skipped":214,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Jul  6 19:35:06.628: INFO: Running AfterSuite actions on all nodes
Jul  6 19:35:06.628: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:35:06.628: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:35:06.628: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:35:06.628: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:35:06.628: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 24 lines ...
Jul  6 19:32:23.495: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: the server is currently unable to handle the request (get pods dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc)
Jul  6 19:32:53.526: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: the server is currently unable to handle the request (get pods dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc)
Jul  6 19:33:23.559: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: the server is currently unable to handle the request (get pods dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc)
Jul  6 19:33:53.591: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: the server is currently unable to handle the request (get pods dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc)
Jul  6 19:34:23.623: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: the server is currently unable to handle the request (get pods dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc)
Jul  6 19:34:53.656: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: the server is currently unable to handle the request (get pods dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc)
Jul  6 19:35:23.370: FAIL: Unable to read jessie_tcp@dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-8395/pods/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc/proxy/results/jessie_tcp@dns-test-service-2.dns-8395.svc.cluster.local": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7fe220b37108, 0x18, 0xc003298e28)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc0040453b0, 0x29e9900, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
testing.tRunner(0xc000583b00, 0x71cf618)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0706 19:35:23.370718   12558 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  6 19:35:23.370: Unable to read jessie_tcp@dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-8395/pods/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc/proxy/results/jessie_tcp@dns-test-service-2.dns-8395.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7fe220b37108, 0x18, 0xc003298e28)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc0040453b0, 0x29e9900, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x780f3c8, 0xc00005e058, 0xc003298e01, 0xc003298e28, 0xc0040453b0, 0x67ba9a0, 0xc0040453b0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x780f3c8, 0xc00005e058, 0x12a05f200, 0x8bb2c97000, 0xc0040453b0, 0x6cf83e0, 0x24f8401)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc002f9e150, 0x0, 
0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003a9c700, 0xc, 0x10, 0x6fb5f5e, 0x7, 0xc000181800, 0x78a18a8, 0xc003abcc60, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00123e420, 0xc000181800, 0xc003a9c700, 0xc, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:322 +0xb2f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000583b00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000583b00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b\ntesting.tRunner(0xc000583b00, 0x71cf618)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6b4ac20, 0xc0005fc100)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6b4ac20, 0xc0005fc100)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000579040, 0x189, 0x87cadfb, 0x7d, 0xd9, 0xc000523800, 0xa8a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x628e540, 0x76c5570)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000579040, 0x189, 0xc001765648, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000579040, 0x189, 0xc001765730, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x7059d05, 0x24, 0xc001765990, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7fe220b37108, 0x18, 0xc003298e28)
... skipping 245 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:35:23.370: Unable to read jessie_tcp@dns-test-service-2.dns-8395.svc.cluster.local from pod dns-8395/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-8395/pods/dns-test-efa4ea1b-8a99-4d5e-bf93-5855bcc0dbcc/proxy/results/jessie_tcp@dns-test-service-2.dns-8395.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":13,"skipped":160,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should support configurable pod resolv.conf","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Jul  6 19:35:25.148: INFO: Running AfterSuite actions on all nodes
Jul  6 19:35:25.148: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:35:25.148: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:35:25.148: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:35:25.148: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:35:25.148: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  6 19:32:55.497: INFO: Creating ReplicaSet my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70
Jul  6 19:32:55.560: INFO: Pod name my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70: Found 1 pods out of 1
Jul  6 19:32:55.560: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70" is running
Jul  6 19:32:57.624: INFO: Pod "my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 19:32:55 +0000 UTC Reason: Message:}])
Jul  6 19:32:57.624: INFO: Trying to dial the pod
Jul  6 19:33:32.719: INFO: Controller my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70: Failed to GET from replica 1 [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg]: the server is currently unable to handle the request (get pods my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.61.241", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00388b2f0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0038b5f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc00087157d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:34:07.719: INFO: Controller my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70: Failed to GET from replica 1 [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg]: the server is currently unable to handle the request (get pods my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.61.241", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00388b2f0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0038b5f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc00087157d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:34:42.719: INFO: Controller my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70: Failed to GET from replica 1 [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg]: the server is currently unable to handle the request (get pods my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.61.241", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00388b2f0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0038b5f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc00087157d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:35:17.718: INFO: Controller my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70: Failed to GET from replica 1 [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg]: the server is currently unable to handle the request (get pods my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.61.241", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00388b2f0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0038b5f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc00087157d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:35:47.812: INFO: Controller my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70: Failed to GET from replica 1 [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg]: the server is currently unable to handle the request (get pods my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70-x96jg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196775, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.61.241", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00388b2f0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c76111a6-09a3-49f6-a4ff-4f830d73eb70", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0038b5f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc00087157d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:35:47.813: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000be2c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 195 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:35:47.813: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110
------------------------------
{"msg":"FAILED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":7,"skipped":72,"failed":4,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}
Jul  6 19:35:49.442: INFO: Running AfterSuite actions on all nodes
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 19:35:49.442: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:08:30.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 86 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}
Jul  6 19:36:36.111: INFO: Running AfterSuite actions on all nodes
Jul  6 19:36:36.112: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:36:36.112: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:36:36.112: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:36:36.112: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:36:36.112: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 15 lines ...
Jul  6 19:26:30.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-rolling-update-with-lb-864fb64577\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Jul  6 19:26:32.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196392, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196390, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-with-lb-864fb64577\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  6 19:26:34.883: INFO: Creating a service test-rolling-update-with-lb with type=LoadBalancer and externalTrafficPolicy=Local in namespace deployment-7385
STEP: creating a service deployment-7385/test-rolling-update-with-lb with type=LoadBalancer
STEP: waiting for loadbalancer for service deployment-7385/test-rolling-update-with-lb
Jul  6 19:26:34.953: INFO: Waiting up to 10m0s for service "test-rolling-update-with-lb" to have a LoadBalancer
Jul  6 19:36:35.048: FAIL: Unexpected error:
    <*errors.errorString | 0xc003be2090>: {
        s: "timed out waiting for service \"test-rolling-update-with-lb\" to have a load balancer",
    }
    timed out waiting for service "test-rolling-update-with-lb" to have a load balancer
occurred

... skipping 41 lines ...
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:31 +0000 UTC - event for test-rolling-update-with-lb-864fb64577-plsn6: {kubelet ip-172-20-51-240.ca-central-1.compute.internal} Started: Started container agnhost
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:32 +0000 UTC - event for test-rolling-update-with-lb-864fb64577-2kbpr: {kubelet ip-172-20-61-241.ca-central-1.compute.internal} Started: Started container agnhost
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:32 +0000 UTC - event for test-rolling-update-with-lb-864fb64577-8swz4: {kubelet ip-172-20-56-177.ca-central-1.compute.internal} Created: Created container agnhost
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:32 +0000 UTC - event for test-rolling-update-with-lb-864fb64577-8swz4: {kubelet ip-172-20-56-177.ca-central-1.compute.internal} Started: Started container agnhost
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:32 +0000 UTC - event for test-rolling-update-with-lb-864fb64577-8swz4: {kubelet ip-172-20-56-177.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:34 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:35 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: a6133d2f-e888-4827-af60-35d7b8d4325d\""
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:40 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: 95cabd03-6f98-4c0d-a0f0-7bc27bd9921e\""
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:26:50 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: 2e4266c8-735d-431a-b725-ffaaa082a8a6\""
Jul  6 19:36:35.174: INFO: At 2021-07-06 19:27:10 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: 5837dcf4-f646-454f-b237-0c59770bb74b\""
Jul  6 19:36:35.175: INFO: At 2021-07-06 19:27:50 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: 1af47f61-03ee-4923-9cd8-7b1e54f2deb1\""
Jul  6 19:36:35.175: INFO: At 2021-07-06 19:29:10 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: bb4b8d4b-3252-42d6-95f7-edd679c4e853\""
Jul  6 19:36:35.175: INFO: At 2021-07-06 19:31:50 +0000 UTC - event for test-rolling-update-with-lb: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: error describing subnets: "error listing AWS subnets: \"UnauthorizedOperation: You are not authorized to perform this operation.\\n\\tstatus code: 403, request id: 28ba6423-450f-4cf9-a4b8-0d58eca817ed\""
Jul  6 19:36:35.206: INFO: POD                                           NODE                                            PHASE    GRACE  CONDITIONS
Jul  6 19:36:35.206: INFO: test-rolling-update-with-lb-864fb64577-2kbpr  ip-172-20-61-241.ca-central-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:30 +0000 UTC  }]
Jul  6 19:36:35.206: INFO: test-rolling-update-with-lb-864fb64577-8swz4  ip-172-20-56-177.ca-central-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:30 +0000 UTC  }]
Jul  6 19:36:35.206: INFO: test-rolling-update-with-lb-864fb64577-plsn6  ip-172-20-51-240.ca-central-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 19:26:30 +0000 UTC  }]
Jul  6 19:36:35.206: INFO: 
Jul  6 19:36:35.238: INFO: 
... skipping 170 lines ...
• Failure [606.287 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:161

  Jul  6 19:36:35.049: Unexpected error:
      <*errors.errorString | 0xc003be2090>: {
          s: "timed out waiting for service \"test-rolling-update-with-lb\" to have a load balancer",
      }
      timed out waiting for service "test-rolling-update-with-lb" to have a load balancer
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:1394
------------------------------
{"msg":"FAILED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":17,"skipped":101,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout"]}
Jul  6 19:36:36.895: INFO: Running AfterSuite actions on all nodes
Jul  6 19:36:36.895: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:36:36.895: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:36:36.895: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:36:36.895: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:36:36.895: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 19:36:36.895: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 19:36:36.896: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":13,"skipped":98,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 19:19:12.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 273 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  31s   default-scheduler  Successfully assigned pod-network-test-8241/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     30s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    30s   kubelet            Created container webserver
  Normal  Started    30s   kubelet            Started container webserver

Jul  6 19:19:43.665: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.2.172&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  6 19:19:43.665: INFO: ...failed...will try again in next pass
Jul  6 19:19:43.665: INFO: Breadth first check of 100.96.4.195 on host 172.20.61.17...
Jul  6 19:19:43.697: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.195&port=8081&tries=1'] Namespace:pod-network-test-8241 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:19:43.697: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:19:48.979: INFO: Waiting for responses: map[netserver-2:{}]
Jul  6 19:19:50.979: INFO: 
Output of kubectl describe pod pod-network-test-8241/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  39s   default-scheduler  Successfully assigned pod-network-test-8241/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     38s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    38s   kubelet            Created container webserver
  Normal  Started    38s   kubelet            Started container webserver

Jul  6 19:19:51.931: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.195&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 19:19:51.931: INFO: ...failed...will try again in next pass
Jul  6 19:19:51.931: INFO: Breadth first check of 100.96.1.131 on host 172.20.61.241...
Jul  6 19:19:51.962: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.131&port=8081&tries=1'] Namespace:pod-network-test-8241 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:19:51.962: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:19:57.261: INFO: Waiting for responses: map[netserver-3:{}]
Jul  6 19:19:59.262: INFO: 
Output of kubectl describe pod pod-network-test-8241/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  48s   default-scheduler  Successfully assigned pod-network-test-8241/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     47s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    47s   kubelet            Created container webserver
  Normal  Started    47s   kubelet            Started container webserver

Jul  6 19:20:00.256: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.131&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Jul  6 19:20:00.256: INFO: ...failed...will try again in next pass
Jul  6 19:20:00.256: INFO: Going to retry 3 out of 4 pods....
Jul  6 19:20:00.256: INFO: Doublechecking 1 pods in host 172.20.56.177 which weren't seen the first time.
Jul  6 19:20:00.256: INFO: Now attempting to probe pod [[[ 100.96.2.172 ]]]
Jul  6 19:20:00.288: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.2.172&port=8081&tries=1'] Namespace:pod-network-test-8241 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:20:00.288: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:20:05.553: INFO: Waiting for responses: map[netserver-1:{}]
... skipping 377 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m25s  default-scheduler  Successfully assigned pod-network-test-8241/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     6m24s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m24s  kubelet            Created container webserver
  Normal  Started    6m24s  kubelet            Started container webserver

Jul  6 19:25:37.712: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.2.172&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  6 19:25:37.712: INFO: ... Done probing pod [[[ 100.96.2.172 ]]]
Jul  6 19:25:37.712: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-8241/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Jul  6 19:31:15.187: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.195&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 19:31:15.187: INFO: ... Done probing pod [[[ 100.96.4.195 ]]]
Jul  6 19:31:15.187: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  17m   default-scheduler  Successfully assigned pod-network-test-8241/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     17m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    17m   kubelet            Created container webserver
  Normal  Started    17m   kubelet            Started container webserver

Jul  6 19:36:53.132: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.131&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Jul  6 19:36:53.132: INFO: ... Done probing pod [[[ 100.96.1.131 ]]]
Jul  6 19:36:53.132: INFO: succeeded at polling 1 out of 4 connections
Jul  6 19:36:53.132: INFO: pod polling failure summary:
Jul  6 19:36:53.132: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.2.172&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Jul  6 19:36:53.132: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.195&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Jul  6 19:36:53.132: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.131&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}]
Jul  6 19:36:53.133: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c2a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 202 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  6 19:36:53.133: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":98,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
Jul  6 19:36:54.864: INFO: Running AfterSuite actions on all nodes
Jul  6 19:36:54.864: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:36:54.864: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:36:54.864: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:36:54.864: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:36:54.864: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 23 lines ...
Jul  6 19:18:14.395: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:18:44.427: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:19:14.458: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:19:44.489: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:20:14.520: INFO: Unable to read jessie_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:20:44.552: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:20:44.553: INFO: Lookups using dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:21:19.585: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:21:49.617: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:22:19.649: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:22:49.680: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:23:19.713: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:23:49.745: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:24:19.777: INFO: Unable to read jessie_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:24:49.809: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:24:49.809: INFO: Lookups using dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:25:24.585: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:25:54.619: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:26:24.651: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:26:54.683: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:27:24.717: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:27:54.749: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:28:24.781: INFO: Unable to read jessie_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:28:54.813: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:28:54.813: INFO: Lookups using dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:29:29.585: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:29:59.617: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:30:29.648: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:30:59.679: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:31:29.711: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:31:59.743: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:32:29.774: INFO: Unable to read jessie_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:32:59.805: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:32:59.805: INFO: Lookups using dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:33:29.837: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:33:59.869: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:34:29.900: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:34:59.932: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:35:29.963: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:35:59.998: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:36:30.029: INFO: Unable to read jessie_udp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:37:00.062: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f: the server is currently unable to handle the request (get pods dns-test-60098420-8c82-4a86-885c-efc362a8a26f)
Jul  6 19:37:00.062: INFO: Lookups using dns-6993/dns-test-60098420-8c82-4a86-885c-efc362a8a26f failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-6993.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:37:00.062: FAIL: Unexpected error:
    <*errors.errorString | 0xc000238240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 196 lines ...
• Failure [1225.971 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:37:00.062: Unexpected error:
      <*errors.errorString | 0xc000238240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
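The DNS failure above ends with a `Lookups using … failed for: [...]` summary line listing every probe that never resolved. When triaging many runs of this log, it can help to pull that list out programmatically. A minimal sketch of such a helper — the function name and sample line are illustrative, not part of the e2e framework:

```python
import re


def failed_probes(log_line: str) -> list[str]:
    """Extract probe names from a 'Lookups using ... failed for: [...]' summary line."""
    match = re.search(r"failed for: \[([^\]]*)\]", log_line)
    if not match:
        return []
    # Probe names are space-separated inside the brackets.
    return match.group(1).split()


line = ("Jul  6 19:37:00.062: INFO: Lookups using dns-6993/dns-test failed for: "
        "[wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord]")
print(failed_probes(line))
# → ['wheezy_udp@PodARecord', 'wheezy_tcp@PodARecord', 'jessie_udp@PodARecord']
```

Filtering on the `wheezy_`/`jessie_` prefixes then shows whether UDP, TCP, or /etc/hosts probes dominate the failures.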
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":64,"failed":3,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
Jul  6 19:37:01.968: INFO: Running AfterSuite actions on all nodes
Jul  6 19:37:01.968: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:37:01.968: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:37:01.968: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:37:01.968: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:37:01.968: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 25 lines ...
• [SLOW TEST:244.167 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:348
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":46,"skipped":347,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
Jul  6 19:37:15.654: INFO: Running AfterSuite actions on all nodes
Jul  6 19:37:15.655: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:37:15.655: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:37:15.655: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:37:15.655: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:37:15.655: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 278 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  31s   default-scheduler  Successfully assigned pod-network-test-5151/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     30s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    30s   kubelet            Created container webserver
  Normal  Started    30s   kubelet            Started container webserver

Jul  6 19:21:36.657: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.3.185&port=8083&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  6 19:21:36.657: INFO: ...failed...will try again in next pass
Jul  6 19:21:36.657: INFO: Breadth first check of 100.96.2.189 on host 172.20.56.177...
Jul  6 19:21:36.689: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.2.189&port=8083&tries=1'] Namespace:pod-network-test-5151 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:21:36.689: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:21:41.968: INFO: Waiting for responses: map[netserver-1:{}]
Jul  6 19:21:43.969: INFO: 
Output of kubectl describe pod pod-network-test-5151/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  39s   default-scheduler  Successfully assigned pod-network-test-5151/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     38s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    38s   kubelet            Created container webserver
  Normal  Started    38s   kubelet            Started container webserver

Jul  6 19:21:44.894: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.2.189&port=8083&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  6 19:21:44.894: INFO: ...failed...will try again in next pass
Jul  6 19:21:44.894: INFO: Breadth first check of 100.96.4.215 on host 172.20.61.17...
Jul  6 19:21:44.926: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.4.215&port=8083&tries=1'] Namespace:pod-network-test-5151 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:21:44.926: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:21:50.201: INFO: Waiting for responses: map[netserver-2:{}]
Jul  6 19:21:52.201: INFO: 
Output of kubectl describe pod pod-network-test-5151/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  48s   default-scheduler  Successfully assigned pod-network-test-5151/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     47s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    47s   kubelet            Created container webserver
  Normal  Started    47s   kubelet            Started container webserver

Jul  6 19:21:53.106: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.4.215&port=8083&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 19:21:53.106: INFO: ...failed...will try again in next pass
Jul  6 19:21:53.106: INFO: Breadth first check of 100.96.1.154 on host 172.20.61.241...
Jul  6 19:21:53.138: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.1.154&port=8083&tries=1'] Namespace:pod-network-test-5151 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 19:21:53.138: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 19:21:53.437: INFO: Waiting for responses: map[]
Jul  6 19:21:53.437: INFO: reached 100.96.1.154 after 0/1 tries
Jul  6 19:21:53.437: INFO: Going to retry 3 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m26s  default-scheduler  Successfully assigned pod-network-test-5151/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     6m25s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m25s  kubelet            Created container webserver
  Normal  Started    6m25s  kubelet            Started container webserver

Jul  6 19:27:31.603: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.3.185&port=8083&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  6 19:27:31.603: INFO: ... Done probing pod [[[ 100.96.3.185 ]]]
Jul  6 19:27:31.603: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-5151/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Jul  6 19:33:09.706: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.2.189&port=8083&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  6 19:33:09.707: INFO: ... Done probing pod [[[ 100.96.2.189 ]]]
Jul  6 19:33:09.707: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  17m   default-scheduler  Successfully assigned pod-network-test-5151/netserver-3 to ip-172-20-61-241.ca-central-1.compute.internal
  Normal  Pulled     17m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    17m   kubelet            Created container webserver
  Normal  Started    17m   kubelet            Started container webserver

Jul  6 19:38:48.195: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.4.215&port=8083&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 19:38:48.195: INFO: ... Done probing pod [[[ 100.96.4.215 ]]]
Jul  6 19:38:48.195: INFO: succeeded at polling 1 out of 4 connections
Jul  6 19:38:48.195: INFO: pod polling failure summary:
Jul  6 19:38:48.195: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.3.185&port=8083&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Jul  6 19:38:48.195: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.2.189&port=8083&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Jul  6 19:38:48.195: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.159:9080/dial?request=hostname&protocol=http&host=100.96.4.215&port=8083&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Jul  6 19:38:48.195: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000483200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 186 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  6 19:38:48.195: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
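The intra-pod check above curls the test pod's `/dial` endpoint and compares the `retrieved map[...]` of responding hostnames against the `expected map[...]`. A simplified sketch of that pass/fail comparison, assuming the same semantics as `test/e2e/common/network/networking.go` (the function name is hypothetical):

```python
def dial_succeeded(retrieved: dict, expected: dict) -> bool:
    """The probe passes only when every expected hostname appears in the responses."""
    return set(expected) <= set(retrieved)


# Mirrors the failures above: retrieved map[] vs expected map[netserver-0:{}]
print(dial_succeeded({}, {"netserver-0": {}}))
# → False
print(dial_succeeded({"netserver-0": "ok"}, {"netserver-0": {}}))
# → True
```

An empty `retrieved map[]` after 46 tries, as logged here, therefore means the target pod's webserver never answered at all, pointing at pod-to-pod connectivity rather than a flaky response.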
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":146,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Jul  6 19:38:49.945: INFO: Running AfterSuite actions on all nodes
Jul  6 19:38:49.945: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:38:49.945: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:38:49.945: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:38:49.945: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:38:49.945: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 22 lines ...
Jul  6 19:25:04.804: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:25:34.838: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:26:04.870: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:26:34.904: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:27:04.936: INFO: Unable to read jessie_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:27:34.968: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:27:34.969: INFO: Lookups using dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:28:10.003: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:28:40.038: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:29:10.072: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:29:40.105: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:30:10.138: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:30:40.171: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:31:10.211: INFO: Unable to read jessie_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:31:40.244: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:31:40.244: INFO: Lookups using dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:32:15.001: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:32:45.035: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:33:15.072: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:33:45.106: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:34:15.139: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:34:45.172: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:35:15.204: INFO: Unable to read jessie_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:35:45.237: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:35:45.237: INFO: Lookups using dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:36:20.004: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:36:50.037: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:37:20.069: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:37:50.101: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:38:20.135: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:38:50.167: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:39:20.199: INFO: Unable to read jessie_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:39:50.231: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:39:50.232: INFO: Lookups using dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:40:20.265: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:40:50.334: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:41:20.367: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:41:50.399: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:42:20.431: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:42:50.470: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:43:20.502: INFO: Unable to read jessie_udp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:43:50.534: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef: the server is currently unable to handle the request (get pods dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef)
Jul  6 19:43:50.534: INFO: Lookups using dns-3371/dns-test-d13ccf47-cbdb-4205-8e0d-106b1a5e67ef failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3371.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 19:43:50.537: FAIL: Unexpected error:
    <*errors.errorString | 0xc000246250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 170 lines ...
• Failure [1219.954 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 19:43:50.537: Unexpected error:
      <*errors.errorString | 0xc000246250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":281,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
Jul  6 19:43:52.380: INFO: Running AfterSuite actions on all nodes
Jul  6 19:43:52.380: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 19:43:52.380: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 19:43:52.380: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 19:43:52.380: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 19:43:52.381: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 14 lines ...
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-5c0db530-4955-4bdb-9602-acc75a16f44a]
STEP: Verifying pods for RC slow-terminating-unready-pod
Jul  6 19:27:30.909: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Jul  6 19:28:03.066: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qklgn]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qklgn)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.51.240", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004208e40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011173a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc00420dbed)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:28:35.160: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qklgn]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qklgn)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.51.240", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004208e40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011173a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc00420dbed)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:29:07.158: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qklgn]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qklgn)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.51.240", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004208e40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011173a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc00420dbed)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:29:39.160: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qklgn]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qklgn)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.51.240", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004208e40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011173a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc00420dbed)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 19:30:11.157: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qklgn]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qklgn)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761196450, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time: