PR        olemarkus: Enable IRSA for CCM
Result    FAILURE
Tests     0 failed / 0 succeeded
Started   2021-07-10 07:51
Elapsed   1h6m
Revision  f62f46b50a62cfa85c19ee015ccc9f4761c4b0b6
Refs      11818

No Test Failures!


Error lines from build-log.txt

... skipping 486 lines ...
I0710 07:55:38.059507    4229 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0710 07:55:38.073950   12211 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0710 07:55:38.074079   12211 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0710 07:55:38.074083   12211 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
W0710 07:55:38.564581    4229 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0710 07:55:38.564654    4229 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --yes
I0710 07:55:38.577293   12222 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0710 07:55:38.577661   12222 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0710 07:55:38.577666   12222 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
I0710 07:55:39.122888    4229 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/10 07:55:39 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0710 07:55:39.132423    4229 http.go:37] curl https://ip.jsb.workers.dev
I0710 07:55:39.240485    4229 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.0-beta.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=kubenet --container-runtime=containerd --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.serviceAccountIssuerDiscovery.discoveryStore=s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery --override=cluster.spec.serviceAccountIssuerDiscovery.enableAWSOIDCProvider=true --admin-access 146.148.58.30/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-2a --master-size c5.large
I0710 07:55:39.254186   12233 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0710 07:55:39.254304   12233 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0710 07:55:39.254308   12233 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
I0710 07:55:39.297747   12233 create_cluster.go:828] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 33 lines ...
I0710 07:56:07.542312    4229 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0710 07:56:07.557089   12253 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0710 07:56:07.557861   12253 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0710 07:56:07.557870   12253 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
Validating cluster e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

W0710 07:56:09.030429   12253 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:56:19.066114   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:56:29.131054   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:56:39.168156   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:56:49.203077   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:56:59.238840   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:57:09.272080   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:57:19.307791   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:57:29.339956   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:57:39.374072   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:57:49.427020   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:57:59.491925   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:58:09.527583   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:58:19.606407   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
W0710 07:58:29.630362   12253 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:58:39.662372   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:58:49.708586   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:58:59.746326   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:59:09.788914   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:59:19.822506   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:59:29.858522   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:59:39.894117   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:59:49.924110   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 07:59:59.971902   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 08:00:10.003797   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0710 08:00:20.050754   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 8 lines ...
Machine	i-081318d29ecd58b29				machine "i-081318d29ecd58b29" has not yet joined cluster
Machine	i-088af31c3b0e30700				machine "i-088af31c3b0e30700" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-485wp	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-485wp" is pending
Pod	kube-system/coredns-f45c4bf76-jvsvz		system-cluster-critical pod "coredns-f45c4bf76-jvsvz" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-t4r8x	system-cluster-critical pod "ebs-csi-controller-566c97f85c-t4r8x" is pending

Validation Failed
W0710 08:00:34.120839   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 11 lines ...
Pod	kube-system/coredns-autoscaler-6f594f4c58-485wp		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-485wp" is pending
Pod	kube-system/coredns-f45c4bf76-jvsvz			system-cluster-critical pod "coredns-f45c4bf76-jvsvz" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-t4r8x		system-cluster-critical pod "ebs-csi-controller-566c97f85c-t4r8x" is pending
Pod	kube-system/ebs-csi-node-222hr				system-node-critical pod "ebs-csi-node-222hr" is pending
Pod	kube-system/ebs-csi-node-pczvt				system-node-critical pod "ebs-csi-node-pczvt" is pending

Validation Failed
W0710 08:00:46.947391   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 12 lines ...
Pod	kube-system/coredns-autoscaler-6f594f4c58-485wp		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-485wp" is pending
Pod	kube-system/coredns-f45c4bf76-jvsvz			system-cluster-critical pod "coredns-f45c4bf76-jvsvz" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-t4r8x		system-cluster-critical pod "ebs-csi-controller-566c97f85c-t4r8x" is pending
Pod	kube-system/ebs-csi-node-hh8tt				system-node-critical pod "ebs-csi-node-hh8tt" is pending
Pod	kube-system/ebs-csi-node-mbk4r				system-node-critical pod "ebs-csi-node-mbk4r" is pending

Validation Failed
W0710 08:00:59.583843   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 10 lines ...
Pod	kube-system/coredns-f45c4bf76-jvsvz					system-cluster-critical pod "coredns-f45c4bf76-jvsvz" is not ready (coredns)
Pod	kube-system/ebs-csi-controller-566c97f85c-t4r8x				system-cluster-critical pod "ebs-csi-controller-566c97f85c-t4r8x" is pending
Pod	kube-system/kube-proxy-ip-172-20-35-182.ap-northeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-35-182.ap-northeast-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-41-208.ap-northeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-41-208.ap-northeast-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-49-206.ap-northeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-49-206.ap-northeast-2.compute.internal" is pending

Validation Failed
W0710 08:01:12.224549   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 6 lines ...
ip-172-20-54-164.ap-northeast-2.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-f45c4bf76-jvsvz	system-cluster-critical pod "coredns-f45c4bf76-jvsvz" is not ready (coredns)

Validation Failed
W0710 08:01:24.892180   12253 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 785 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 328 lines ...
STEP: Creating a kubernetes client
Jul 10 08:04:02.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0710 08:04:03.249988   12854 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 10 08:04:03.250: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-03a25358-fe86-49ff-9148-d57cb1a02ac7
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:03.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1638" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:04.224: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:04.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3553" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:05.126: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 62 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:202
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:06.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9474" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:07.408: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 73 lines ...
Jul 10 08:04:03.152: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Jul 10 08:04:03.629: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-a6650a17-0d06-4bdd-9aa2-a0ce60df77dc" in namespace "security-context-test-4352" to be "Succeeded or Failed"
Jul 10 08:04:03.787: INFO: Pod "busybox-readonly-true-a6650a17-0d06-4bdd-9aa2-a0ce60df77dc": Phase="Pending", Reason="", readiness=false. Elapsed: 158.064778ms
Jul 10 08:04:05.945: INFO: Pod "busybox-readonly-true-a6650a17-0d06-4bdd-9aa2-a0ce60df77dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316683739s
Jul 10 08:04:08.105: INFO: Pod "busybox-readonly-true-a6650a17-0d06-4bdd-9aa2-a0ce60df77dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47618287s
Jul 10 08:04:10.271: INFO: Pod "busybox-readonly-true-a6650a17-0d06-4bdd-9aa2-a0ce60df77dc": Phase="Failed", Reason="", readiness=false. Elapsed: 6.642743432s
Jul 10 08:04:10.272: INFO: Pod "busybox-readonly-true-a6650a17-0d06-4bdd-9aa2-a0ce60df77dc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:10.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4352" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:8.414 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0710 08:04:03.165335   12894 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 10 08:04:03.165: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 10 08:04:03.641: INFO: Waiting up to 5m0s for pod "pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4" in namespace "emptydir-4101" to be "Succeeded or Failed"
Jul 10 08:04:03.799: INFO: Pod "pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4": Phase="Pending", Reason="", readiness=false. Elapsed: 157.797328ms
Jul 10 08:04:05.958: INFO: Pod "pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316418448s
Jul 10 08:04:08.116: INFO: Pod "pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47528611s
Jul 10 08:04:10.275: INFO: Pod "pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.634145422s
STEP: Saw pod success
Jul 10 08:04:10.275: INFO: Pod "pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4" satisfied condition "Succeeded or Failed"
Jul 10 08:04:10.433: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4 container test-container: <nil>
STEP: delete the pod
Jul 10 08:04:11.043: INFO: Waiting for pod pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4 to disappear
Jul 10 08:04:11.201: INFO: Pod pod-4bd5bee5-997a-4a1d-874c-2fedc7cc15f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.150 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:9.483 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Jul 10 08:04:03.206: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-2c16e9da-ffe3-4e30-839a-9db8fc1a8952
STEP: Creating a pod to test consume configMaps
Jul 10 08:04:03.856: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd" in namespace "projected-3108" to be "Succeeded or Failed"
Jul 10 08:04:04.022: INFO: Pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 165.217347ms
Jul 10 08:04:06.184: INFO: Pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327779457s
Jul 10 08:04:08.347: INFO: Pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490503498s
Jul 10 08:04:10.510: INFO: Pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65322719s
Jul 10 08:04:12.673: INFO: Pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.816253983s
STEP: Saw pod success
Jul 10 08:04:12.673: INFO: Pod "pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd" satisfied condition "Succeeded or Failed"
Jul 10 08:04:12.834: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd container agnhost-container: <nil>
STEP: delete the pod
Jul 10 08:04:13.171: INFO: Waiting for pod pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd to disappear
Jul 10 08:04:13.332: INFO: Pod pod-projected-configmaps-87361832-a9cd-40fe-ac62-801c732c1fbd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.266 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:13.845: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:13.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5789" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:14.036: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:14.429: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:04:12.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc" in namespace "projected-1713" to be "Succeeded or Failed"
Jul 10 08:04:12.288: INFO: Pod "downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 160.591798ms
Jul 10 08:04:14.449: INFO: Pod "downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322414992s
STEP: Saw pod success
Jul 10 08:04:14.449: INFO: Pod "downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc" satisfied condition "Succeeded or Failed"
Jul 10 08:04:14.612: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc container client-container: <nil>
STEP: delete the pod
Jul 10 08:04:14.955: INFO: Waiting for pod downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc to disappear
Jul 10 08:04:15.115: INFO: Pod downwardapi-volume-7de49508-5b16-4d83-95b0-f3580abe7bbc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:15.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1713" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:15.480: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul 10 08:04:08.303: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 10 08:04:08.303: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-ztxr
STEP: Creating a pod to test subpath
Jul 10 08:04:08.467: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-ztxr" in namespace "provisioning-9155" to be "Succeeded or Failed"
Jul 10 08:04:08.628: INFO: Pod "pod-subpath-test-inlinevolume-ztxr": Phase="Pending", Reason="", readiness=false. Elapsed: 161.069387ms
Jul 10 08:04:10.791: INFO: Pod "pod-subpath-test-inlinevolume-ztxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323632609s
Jul 10 08:04:12.954: INFO: Pod "pod-subpath-test-inlinevolume-ztxr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487182323s
Jul 10 08:04:15.116: INFO: Pod "pod-subpath-test-inlinevolume-ztxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.649523687s
STEP: Saw pod success
Jul 10 08:04:15.116: INFO: Pod "pod-subpath-test-inlinevolume-ztxr" satisfied condition "Succeeded or Failed"
Jul 10 08:04:15.278: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-ztxr container test-container-volume-inlinevolume-ztxr: <nil>
STEP: delete the pod
Jul 10 08:04:15.617: INFO: Waiting for pod pod-subpath-test-inlinevolume-ztxr to disappear
Jul 10 08:04:15.798: INFO: Pod pod-subpath-test-inlinevolume-ztxr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-ztxr
Jul 10 08:04:15.798: INFO: Deleting pod "pod-subpath-test-inlinevolume-ztxr" in namespace "provisioning-9155"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0710 08:04:04.104126   13008 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 10 08:04:04.104: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 10 08:04:04.593: INFO: Waiting up to 5m0s for pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13" in namespace "emptydir-7202" to be "Succeeded or Failed"
Jul 10 08:04:04.756: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13": Phase="Pending", Reason="", readiness=false. Elapsed: 162.659988ms
Jul 10 08:04:06.921: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327163718s
Jul 10 08:04:09.084: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490980169s
Jul 10 08:04:11.249: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655395681s
Jul 10 08:04:13.412: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13": Phase="Pending", Reason="", readiness=false. Elapsed: 8.818400455s
Jul 10 08:04:15.575: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.981620759s
STEP: Saw pod success
Jul 10 08:04:15.575: INFO: Pod "pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13" satisfied condition "Succeeded or Failed"
Jul 10 08:04:15.757: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13 container test-container: <nil>
STEP: delete the pod
Jul 10 08:04:16.088: INFO: Waiting for pod pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13 to disappear
Jul 10 08:04:16.250: INFO: Pod pod-ed24d76e-34ee-4a6a-8d24-ca8427d21e13 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 12 lines ...
STEP: Creating a kubernetes client
Jul 10 08:04:14.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1836
STEP: calling kubectl wait --for=delete
Jul 10 08:04:15.245: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6936 wait --for=delete pod/doesnotexist'
Jul 10 08:04:16.001: INFO: stderr: ""
Jul 10 08:04:16.001: INFO: stdout: ""
Jul 10 08:04:16.001: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6936 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:16.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6936" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Jul 10 08:04:05.149: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-45ab2d58-b8cb-4e57-9630-0b1c560fc0f5
STEP: Creating a pod to test consume configMaps
Jul 10 08:04:05.805: INFO: Waiting up to 5m0s for pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57" in namespace "configmap-6956" to be "Succeeded or Failed"
Jul 10 08:04:05.967: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57": Phase="Pending", Reason="", readiness=false. Elapsed: 161.453048ms
Jul 10 08:04:08.130: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324536099s
Jul 10 08:04:10.292: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486502271s
Jul 10 08:04:12.454: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648611254s
Jul 10 08:04:14.616: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810528388s
Jul 10 08:04:16.778: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.972335913s
STEP: Saw pod success
Jul 10 08:04:16.778: INFO: Pod "pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57" satisfied condition "Succeeded or Failed"
Jul 10 08:04:16.939: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57 container agnhost-container: <nil>
STEP: delete the pod
Jul 10 08:04:17.268: INFO: Waiting for pod pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57 to disappear
Jul 10 08:04:17.429: INFO: Pod pod-configmaps-05503f8b-61af-4404-a568-5cc083d00c57 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 16 lines ...
Jul 10 08:04:03.197: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-5789/configmap-test-57515215-feb7-4841-9dd5-ea5d55aed993
STEP: Creating a pod to test consume configMaps
Jul 10 08:04:03.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a" in namespace "configmap-5789" to be "Succeeded or Failed"
Jul 10 08:04:03.999: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Pending", Reason="", readiness=false. Elapsed: 167.298167ms
Jul 10 08:04:06.159: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326914378s
Jul 10 08:04:08.318: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485662829s
Jul 10 08:04:10.476: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.643777562s
Jul 10 08:04:12.634: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.802327995s
Jul 10 08:04:14.793: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.96096099s
Jul 10 08:04:16.953: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.121367266s
STEP: Saw pod success
Jul 10 08:04:16.953: INFO: Pod "pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a" satisfied condition "Succeeded or Failed"
Jul 10 08:04:17.112: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a container env-test: <nil>
STEP: delete the pod
Jul 10 08:04:17.440: INFO: Waiting for pod pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a to disappear
Jul 10 08:04:17.598: INFO: Pod pod-configmaps-6aee89f9-7afc-49c5-b4e4-dbc2c885096a no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.516 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:18.103: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 20 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:18.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul 10 08:04:19.066: INFO: found topology map[topology.kubernetes.io/zone:ap-northeast-2a]
Jul 10 08:04:19.066: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul 10 08:04:19.066: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 128 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:14.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 10 08:04:14.983: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-63358799-6883-453f-8ff4-1f0cb71a52ea" in namespace "security-context-test-2334" to be "Succeeded or Failed"
Jul 10 08:04:15.140: INFO: Pod "busybox-readonly-false-63358799-6883-453f-8ff4-1f0cb71a52ea": Phase="Pending", Reason="", readiness=false. Elapsed: 157.817659ms
Jul 10 08:04:17.301: INFO: Pod "busybox-readonly-false-63358799-6883-453f-8ff4-1f0cb71a52ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317975264s
Jul 10 08:04:19.459: INFO: Pod "busybox-readonly-false-63358799-6883-453f-8ff4-1f0cb71a52ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476088552s
Jul 10 08:04:21.617: INFO: Pod "busybox-readonly-false-63358799-6883-453f-8ff4-1f0cb71a52ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.63484389s
Jul 10 08:04:21.618: INFO: Pod "busybox-readonly-false-63358799-6883-453f-8ff4-1f0cb71a52ea" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:21.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2334" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:17.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 10 08:04:18.905: INFO: Waiting up to 5m0s for pod "pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1" in namespace "emptydir-9556" to be "Succeeded or Failed"
Jul 10 08:04:19.072: INFO: Pod "pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1": Phase="Pending", Reason="", readiness=false. Elapsed: 166.702628ms
Jul 10 08:04:21.234: INFO: Pod "pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.329537905s
STEP: Saw pod success
Jul 10 08:04:21.235: INFO: Pod "pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1" satisfied condition "Succeeded or Failed"
Jul 10 08:04:21.396: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1 container test-container: <nil>
STEP: delete the pod
Jul 10 08:04:21.731: INFO: Waiting for pod pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1 to disappear
Jul 10 08:04:21.892: INFO: Pod pod-9b5611c3-7501-4e28-8feb-9e16100bf6a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:21.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9556" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:22.411: INFO: Driver local doesn't support ext3 -- skipping
... skipping 35 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:16.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:23.558: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Jul 10 08:04:16.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 10 08:04:22.952: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Jul 10 08:04:17.220: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 10 08:04:17.220: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5501 describe pod agnhost-primary-kpfhn'
Jul 10 08:04:18.144: INFO: stderr: ""
Jul 10 08:04:18.144: INFO: stdout: "Name:         agnhost-primary-kpfhn\nNamespace:    kubectl-5501\nPriority:     0\nNode:         ip-172-20-35-182.ap-northeast-2.compute.internal/172.20.35.182\nStart Time:   Sat, 10 Jul 2021 08:04:05 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.5.10\nIPs:\n  IP:           100.96.5.10\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://4a4d6938afbd789f0aaab30208e6dc85feefa245c5aef9e3eebd3773153b6aeb\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 10 Jul 2021 08:04:15 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7n56 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-b7n56:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  13s   default-scheduler  Successfully assigned kubectl-5501/agnhost-primary-kpfhn to ip-172-20-35-182.ap-northeast-2.compute.internal\n  Normal  Pulling    11s   kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     3s    kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 7.443333446s\n  Normal  Created    3s    kubelet            Created container agnhost-primary\n  Normal  Started    3s    kubelet            Started container agnhost-primary\n"
Jul 10 08:04:18.144: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5501 describe rc agnhost-primary'
Jul 10 08:04:19.222: INFO: stderr: ""
Jul 10 08:04:19.223: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-5501\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  14s   replication-controller  Created pod: agnhost-primary-kpfhn\n"
Jul 10 08:04:19.223: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5501 describe service agnhost-primary'
Jul 10 08:04:20.318: INFO: stderr: ""
Jul 10 08:04:20.318: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-5501\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.64.108.183\nIPs:               100.64.108.183\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.5.10:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jul 10 08:04:20.647: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5501 describe node ip-172-20-35-182.ap-northeast-2.compute.internal'
Jul 10 08:04:22.348: INFO: stderr: ""
Jul 10 08:04:22.348: INFO: stdout: "Name:               ip-172-20-35-182.ap-northeast-2.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=ap-northeast-2\n                    failure-domain.beta.kubernetes.io/zone=ap-northeast-2a\n                    kops.k8s.io/instancegroup=nodes-ap-northeast-2a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-35-182.ap-northeast-2.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.ebs.csi.aws.com/zone=ap-northeast-2a\n                    topology.kubernetes.io/region=ap-northeast-2\n                    topology.kubernetes.io/zone=ap-northeast-2a\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-07e3a6916f931b901\"}\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 10 Jul 2021 08:00:56 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-35-182.ap-northeast-2.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 10 Jul 2021 08:04:21 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 10 Jul 2021 08:01:26 +0000   Sat, 10 Jul 2021 08:00:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 10 Jul 2021 08:01:26 +0000   Sat, 10 Jul 2021 08:00:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 10 Jul 2021 08:01:26 +0000   Sat, 10 Jul 2021 08:00:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 10 Jul 2021 08:01:26 +0000   Sat, 10 Jul 2021 08:01:06 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   172.20.35.182\n  ExternalIP:   3.36.67.24\n  InternalDNS:  ip-172-20-35-182.ap-northeast-2.compute.internal\n  Hostname:     ip-172-20-35-182.ap-northeast-2.compute.internal\n  ExternalDNS:  ec2-3-36-67-24.ap-northeast-2.compute.amazonaws.com\nCapacity:\n  cpu:                2\n  ephemeral-storage:  48725632Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3968640Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  44905542377\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3866240Ki\n  pods:               110\nSystem Info:\n  Machine ID:                  ec24797d4aec87f81c975a28ffae5bb1\n  System UUID:                 ec24797d-4aec-87f8-1c97-5a28ffae5bb1\n  Boot ID:                     56033625-3348-442f-9130-e62e8679b3df\n  Kernel Version:              5.8.0-1038-aws\n  OS Image:                    Ubuntu 20.04.2 LTS\n  Operating System:            linux\n  Architecture:                amd64\n  Container Runtime Version:   containerd://1.4.6\n  Kubelet Version:             v1.22.0-beta.1\n  Kube-Proxy Version:          v1.22.0-beta.1\nPodCIDR:                       100.96.5.0/24\nPodCIDRs:                      100.96.5.0/24\nProviderID:                    aws:///ap-northeast-2a/i-07e3a6916f931b901\nNon-terminated Pods:           (13 in total)\n  Namespace                    Name                                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                    ----                                                                 ------------  ----------  ---------------  -------------  ---\n  container-probe-3001         busybox-bbab7eb6-0794-4d1d-8607-6bf71bedef7d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s\n  container-runtime-9723       termination-message-container5a46e0d5-17e3-44f3-afdc-2f459f355fcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s\n  kube-system                  ebs-csi-node-mbk4r                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s\n  kube-system                  kube-proxy-ip-172-20-35-182.ap-northeast-2.compute.internal          100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m17s\n  kubectl-5501                 agnhost-primary-kpfhn                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s\n  nettest-5961                 netserver-0                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s\n  port-forwarding-3096         pfpod                                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s\n  provisioning-6061            hostpath-symlink-prep-provisioning-6061                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  provisioning-7677            pod-subpath-test-inlinevolume-xwq4                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\n  replication-controller-5092  my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s\n  replication-controller-6626  rc-test-47hl2                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s\n  services-3497                
service-headless-toggled-q2jqj                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s\n  services-3497                service-headless-zg467                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                100m (5%)  0 (0%)\n  memory             0 (0%)     0 (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type     Reason                   Age                    From     Message\n  ----     ------                   ----                   ----     -------\n  Normal   Starting                 4m27s                  kubelet  Starting kubelet.\n  Warning  InvalidDiskCapacity      4m27s                  kubelet  invalid capacity 0 on image filesystem\n  Normal   NodeAllocatableEnforced  4m27s                  kubelet  Updated Node Allocatable limit across pods\n  Normal   NodeHasNoDiskPressure    3m57s (x7 over 4m27s)  kubelet  Node ip-172-20-35-182.ap-northeast-2.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     3m57s (x7 over 4m27s)  kubelet  Node ip-172-20-35-182.ap-northeast-2.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeHasSufficientMemory  3m26s (x8 over 4m27s)  kubelet  Node ip-172-20-35-182.ap-northeast-2.compute.internal status is now: NodeHasSufficientMemory\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1094
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:23.987: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
STEP: Destroying namespace "pod-disks-4463" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.159 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 145 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:26.655: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:22.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 82 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:26.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1407" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:26.731: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":2,"skipped":8,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:26.854: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:27.763: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 131 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:26.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul 10 08:04:27.683: INFO: found topology map[topology.kubernetes.io/zone:ap-northeast-2a]
Jul 10 08:04:27.683: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul 10 08:04:27.683: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 39 lines ...
• [SLOW TEST:25.848 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:28.437: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:30.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-372" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:30.653: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 69 lines ...
Jul 10 08:04:27.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 10 08:04:27.956: INFO: Waiting up to 5m0s for pod "pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7" in namespace "emptydir-2428" to be "Succeeded or Failed"
Jul 10 08:04:28.114: INFO: Pod "pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7": Phase="Pending", Reason="", readiness=false. Elapsed: 157.62341ms
Jul 10 08:04:30.272: INFO: Pod "pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.315443772s
STEP: Saw pod success
Jul 10 08:04:30.272: INFO: Pod "pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7" satisfied condition "Succeeded or Failed"
Jul 10 08:04:30.429: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7 container test-container: <nil>
STEP: delete the pod
Jul 10 08:04:30.764: INFO: Waiting for pod pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7 to disappear
Jul 10 08:04:30.922: INFO: Pod pod-1ab08b39-0fac-4ef9-aceb-478060ee35c7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:30.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2428" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:31.266: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 130 lines ...
W0710 08:04:03.498785   12880 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 10 08:04:03.498: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
Jul 10 08:04:03.821: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:04:04.306: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7677" in namespace "provisioning-7677" to be "Succeeded or Failed"
Jul 10 08:04:04.470: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 163.882518ms
Jul 10 08:04:06.633: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327120188s
Jul 10 08:04:08.795: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488834599s
Jul 10 08:04:10.958: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651342871s
Jul 10 08:04:13.126: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819990814s
Jul 10 08:04:15.288: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.981376689s
STEP: Saw pod success
Jul 10 08:04:15.288: INFO: Pod "hostpath-symlink-prep-provisioning-7677" satisfied condition "Succeeded or Failed"
Jul 10 08:04:15.288: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7677" in namespace "provisioning-7677"
Jul 10 08:04:15.455: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7677" to be fully deleted
Jul 10 08:04:15.616: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xwq4
STEP: Creating a pod to test subpath
Jul 10 08:04:15.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xwq4" in namespace "provisioning-7677" to be "Succeeded or Failed"
Jul 10 08:04:15.958: INFO: Pod "pod-subpath-test-inlinevolume-xwq4": Phase="Pending", Reason="", readiness=false. Elapsed: 161.139818ms
Jul 10 08:04:18.126: INFO: Pod "pod-subpath-test-inlinevolume-xwq4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329170863s
Jul 10 08:04:20.305: INFO: Pod "pod-subpath-test-inlinevolume-xwq4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508049488s
Jul 10 08:04:22.467: INFO: Pod "pod-subpath-test-inlinevolume-xwq4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.669833226s
STEP: Saw pod success
Jul 10 08:04:22.467: INFO: Pod "pod-subpath-test-inlinevolume-xwq4" satisfied condition "Succeeded or Failed"
Jul 10 08:04:22.630: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-xwq4 container test-container-subpath-inlinevolume-xwq4: <nil>
STEP: delete the pod
Jul 10 08:04:23.205: INFO: Waiting for pod pod-subpath-test-inlinevolume-xwq4 to disappear
Jul 10 08:04:23.366: INFO: Pod pod-subpath-test-inlinevolume-xwq4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xwq4
Jul 10 08:04:23.367: INFO: Deleting pod "pod-subpath-test-inlinevolume-xwq4" in namespace "provisioning-7677"
STEP: Deleting pod
Jul 10 08:04:23.528: INFO: Deleting pod "pod-subpath-test-inlinevolume-xwq4" in namespace "provisioning-7677"
Jul 10 08:04:23.851: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7677" in namespace "provisioning-7677" to be "Succeeded or Failed"
Jul 10 08:04:24.012: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 160.913198ms
Jul 10 08:04:26.175: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323785618s
Jul 10 08:04:28.338: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486192429s
Jul 10 08:04:30.501: INFO: Pod "hostpath-symlink-prep-provisioning-7677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.649541501s
STEP: Saw pod success
Jul 10 08:04:30.501: INFO: Pod "hostpath-symlink-prep-provisioning-7677" satisfied condition "Succeeded or Failed"
Jul 10 08:04:30.501: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7677" in namespace "provisioning-7677"
Jul 10 08:04:30.674: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7677" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:30.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7677" for this suite.
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:31.352: INFO: Only supported for providers [vsphere] (not aws)
... skipping 44 lines ...
Jul 10 08:04:18.292: INFO: PersistentVolumeClaim pvc-zjh92 found but phase is Pending instead of Bound.
Jul 10 08:04:20.453: INFO: PersistentVolumeClaim pvc-zjh92 found and phase=Bound (8.784420003s)
Jul 10 08:04:20.453: INFO: Waiting up to 3m0s for PersistentVolume local-9p6rn to have phase Bound
Jul 10 08:04:20.609: INFO: PersistentVolume local-9p6rn found and phase=Bound (155.933219ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rd7q
STEP: Creating a pod to test subpath
Jul 10 08:04:21.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rd7q" in namespace "provisioning-3507" to be "Succeeded or Failed"
Jul 10 08:04:21.232: INFO: Pod "pod-subpath-test-preprovisionedpv-rd7q": Phase="Pending", Reason="", readiness=false. Elapsed: 154.72658ms
Jul 10 08:04:23.393: INFO: Pod "pod-subpath-test-preprovisionedpv-rd7q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315923268s
Jul 10 08:04:25.549: INFO: Pod "pod-subpath-test-preprovisionedpv-rd7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471908038s
Jul 10 08:04:27.705: INFO: Pod "pod-subpath-test-preprovisionedpv-rd7q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.62825888s
STEP: Saw pod success
Jul 10 08:04:27.705: INFO: Pod "pod-subpath-test-preprovisionedpv-rd7q" satisfied condition "Succeeded or Failed"
Jul 10 08:04:27.868: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-rd7q container test-container-subpath-preprovisionedpv-rd7q: <nil>
STEP: delete the pod
Jul 10 08:04:28.193: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rd7q to disappear
Jul 10 08:04:28.348: INFO: Pod pod-subpath-test-preprovisionedpv-rd7q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rd7q
Jul 10 08:04:28.348: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rd7q" in namespace "provisioning-3507"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:31.743: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:31.972: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:32.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9317" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:32.874: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 10 08:04:25.721: INFO: Waiting up to 5m0s for pod "pod-81322945-25ab-4d5c-9417-d8b0daed2f31" in namespace "emptydir-7678" to be "Succeeded or Failed"
Jul 10 08:04:25.884: INFO: Pod "pod-81322945-25ab-4d5c-9417-d8b0daed2f31": Phase="Pending", Reason="", readiness=false. Elapsed: 162.420668ms
Jul 10 08:04:28.044: INFO: Pod "pod-81322945-25ab-4d5c-9417-d8b0daed2f31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322769459s
Jul 10 08:04:30.204: INFO: Pod "pod-81322945-25ab-4d5c-9417-d8b0daed2f31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483383371s
Jul 10 08:04:32.367: INFO: Pod "pod-81322945-25ab-4d5c-9417-d8b0daed2f31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.645417064s
STEP: Saw pod success
Jul 10 08:04:32.367: INFO: Pod "pod-81322945-25ab-4d5c-9417-d8b0daed2f31" satisfied condition "Succeeded or Failed"
Jul 10 08:04:32.527: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-81322945-25ab-4d5c-9417-d8b0daed2f31 container test-container: <nil>
STEP: delete the pod
Jul 10 08:04:32.862: INFO: Waiting for pod pod-81322945-25ab-4d5c-9417-d8b0daed2f31 to disappear
Jul 10 08:04:33.022: INFO: Pod pod-81322945-25ab-4d5c-9417-d8b0daed2f31 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:33.357: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:33.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8757" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:33.574: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 26 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 27 lines ...
• [SLOW TEST:9.201 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support memory backed volumes of specified size
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":2,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:34.621: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 133 lines ...
STEP: Destroying namespace "services-9143" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":3,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:36.846: INFO: Only supported for providers [openstack] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 10 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul 10 08:04:05.358: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 10 08:04:05.358: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-k8wn
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 08:04:05.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-k8wn" in namespace "provisioning-405" to be "Succeeded or Failed"
Jul 10 08:04:05.689: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 160.103837ms
Jul 10 08:04:07.851: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322010649s
Jul 10 08:04:10.013: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48375347s
Jul 10 08:04:12.174: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645059893s
Jul 10 08:04:14.335: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.806564567s
Jul 10 08:04:16.497: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.967949023s
... skipping 4 lines ...
Jul 10 08:04:27.305: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Running", Reason="", readiness=true. Elapsed: 21.775991244s
Jul 10 08:04:29.465: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Running", Reason="", readiness=true. Elapsed: 23.936436036s
Jul 10 08:04:31.627: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Running", Reason="", readiness=true. Elapsed: 26.097908878s
Jul 10 08:04:33.789: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Running", Reason="", readiness=true. Elapsed: 28.260329141s
Jul 10 08:04:35.952: INFO: Pod "pod-subpath-test-inlinevolume-k8wn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.423339335s
STEP: Saw pod success
Jul 10 08:04:35.953: INFO: Pod "pod-subpath-test-inlinevolume-k8wn" satisfied condition "Succeeded or Failed"
Jul 10 08:04:36.113: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-k8wn container test-container-subpath-inlinevolume-k8wn: <nil>
STEP: delete the pod
Jul 10 08:04:36.510: INFO: Waiting for pod pod-subpath-test-inlinevolume-k8wn to disappear
Jul 10 08:04:36.671: INFO: Pod pod-subpath-test-inlinevolume-k8wn no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-k8wn
Jul 10 08:04:36.671: INFO: Deleting pod "pod-subpath-test-inlinevolume-k8wn" in namespace "provisioning-405"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jul 10 08:04:19.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul 10 08:04:20.347: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:04:20.668: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6061" in namespace "provisioning-6061" to be "Succeeded or Failed"
Jul 10 08:04:20.826: INFO: Pod "hostpath-symlink-prep-provisioning-6061": Phase="Pending", Reason="", readiness=false. Elapsed: 158.343688ms
Jul 10 08:04:22.987: INFO: Pod "hostpath-symlink-prep-provisioning-6061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318717317s
Jul 10 08:04:25.145: INFO: Pod "hostpath-symlink-prep-provisioning-6061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.477331716s
STEP: Saw pod success
Jul 10 08:04:25.145: INFO: Pod "hostpath-symlink-prep-provisioning-6061" satisfied condition "Succeeded or Failed"
Jul 10 08:04:25.145: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6061" in namespace "provisioning-6061"
Jul 10 08:04:25.311: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6061" to be fully deleted
Jul 10 08:04:25.468: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-sxk7
STEP: Creating a pod to test subpath
Jul 10 08:04:25.627: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sxk7" in namespace "provisioning-6061" to be "Succeeded or Failed"
Jul 10 08:04:25.786: INFO: Pod "pod-subpath-test-inlinevolume-sxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 157.960019ms
Jul 10 08:04:27.944: INFO: Pod "pod-subpath-test-inlinevolume-sxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31678104s
Jul 10 08:04:30.103: INFO: Pod "pod-subpath-test-inlinevolume-sxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475870412s
Jul 10 08:04:32.266: INFO: Pod "pod-subpath-test-inlinevolume-sxk7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.638500125s
STEP: Saw pod success
Jul 10 08:04:32.266: INFO: Pod "pod-subpath-test-inlinevolume-sxk7" satisfied condition "Succeeded or Failed"
Jul 10 08:04:32.424: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-sxk7 container test-container-volume-inlinevolume-sxk7: <nil>
STEP: delete the pod
Jul 10 08:04:32.749: INFO: Waiting for pod pod-subpath-test-inlinevolume-sxk7 to disappear
Jul 10 08:04:32.907: INFO: Pod pod-subpath-test-inlinevolume-sxk7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-sxk7
Jul 10 08:04:32.907: INFO: Deleting pod "pod-subpath-test-inlinevolume-sxk7" in namespace "provisioning-6061"
STEP: Deleting pod
Jul 10 08:04:33.069: INFO: Deleting pod "pod-subpath-test-inlinevolume-sxk7" in namespace "provisioning-6061"
Jul 10 08:04:33.385: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6061" in namespace "provisioning-6061" to be "Succeeded or Failed"
Jul 10 08:04:33.545: INFO: Pod "hostpath-symlink-prep-provisioning-6061": Phase="Pending", Reason="", readiness=false. Elapsed: 160.103359ms
Jul 10 08:04:35.710: INFO: Pod "hostpath-symlink-prep-provisioning-6061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324521682s
Jul 10 08:04:37.873: INFO: Pod "hostpath-symlink-prep-provisioning-6061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.487763007s
STEP: Saw pod success
Jul 10 08:04:37.873: INFO: Pod "hostpath-symlink-prep-provisioning-6061" satisfied condition "Succeeded or Failed"
Jul 10 08:04:37.873: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6061" in namespace "provisioning-6061"
Jul 10 08:04:38.038: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6061" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:38.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6061" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:38.536: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 122 lines ...
• [SLOW TEST:7.471 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":2,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 101 lines ...
Jul 10 08:04:33.476: INFO: PersistentVolumeClaim pvc-2tmq2 found but phase is Pending instead of Bound.
Jul 10 08:04:35.636: INFO: PersistentVolumeClaim pvc-2tmq2 found and phase=Bound (13.162412265s)
Jul 10 08:04:35.636: INFO: Waiting up to 3m0s for PersistentVolume local-7sqvt to have phase Bound
Jul 10 08:04:35.796: INFO: PersistentVolume local-7sqvt found and phase=Bound (159.448218ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kzww
STEP: Creating a pod to test subpath
Jul 10 08:04:36.281: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kzww" in namespace "provisioning-3100" to be "Succeeded or Failed"
Jul 10 08:04:36.477: INFO: Pod "pod-subpath-test-preprovisionedpv-kzww": Phase="Pending", Reason="", readiness=false. Elapsed: 195.478624ms
Jul 10 08:04:38.644: INFO: Pod "pod-subpath-test-preprovisionedpv-kzww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363081689s
Jul 10 08:04:40.805: INFO: Pod "pod-subpath-test-preprovisionedpv-kzww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.524010695s
STEP: Saw pod success
Jul 10 08:04:40.805: INFO: Pod "pod-subpath-test-preprovisionedpv-kzww" satisfied condition "Succeeded or Failed"
Jul 10 08:04:40.965: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-kzww container test-container-volume-preprovisionedpv-kzww: <nil>
STEP: delete the pod
Jul 10 08:04:41.296: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kzww to disappear
Jul 10 08:04:41.459: INFO: Pod pod-subpath-test-preprovisionedpv-kzww no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kzww
Jul 10 08:04:41.459: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kzww" in namespace "provisioning-3100"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":17,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:43.743: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-25ba1ede-0982-40b4-9581-c51706182239
STEP: Creating a pod to test consume configMaps
Jul 10 08:04:43.519: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880" in namespace "projected-3274" to be "Succeeded or Failed"
Jul 10 08:04:43.681: INFO: Pod "pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880": Phase="Pending", Reason="", readiness=false. Elapsed: 161.974389ms
Jul 10 08:04:45.850: INFO: Pod "pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.331096017s
STEP: Saw pod success
Jul 10 08:04:45.850: INFO: Pod "pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880" satisfied condition "Succeeded or Failed"
Jul 10 08:04:46.015: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880 container agnhost-container: <nil>
STEP: delete the pod
Jul 10 08:04:46.345: INFO: Waiting for pod pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880 to disappear
Jul 10 08:04:46.507: INFO: Pod pod-projected-configmaps-8ca81d62-189c-4c44-a909-ff444e764880 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:46.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3274" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 427 lines ...
• [SLOW TEST:16.590 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:47.407: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:51.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7688" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:51.476: INFO: Only supported for providers [openstack] (not aws)
... skipping 92 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":5,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:54.949: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
Jul 10 08:04:54.910: INFO: The status of Pod pod-update-activedeadlineseconds-f0718f03-2319-4dfe-b708-dff019252e10 is Pending, waiting for it to be Running (with Ready = true)
Jul 10 08:04:56.882: INFO: The status of Pod pod-update-activedeadlineseconds-f0718f03-2319-4dfe-b708-dff019252e10 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 10 08:04:58.036: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f0718f03-2319-4dfe-b708-dff019252e10"
Jul 10 08:04:58.036: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f0718f03-2319-4dfe-b708-dff019252e10" in namespace "pods-760" to be "terminated due to deadline exceeded"
Jul 10 08:04:58.199: INFO: Pod "pod-update-activedeadlineseconds-f0718f03-2319-4dfe-b708-dff019252e10": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 162.502049ms
Jul 10 08:04:58.199: INFO: Pod "pod-update-activedeadlineseconds-f0718f03-2319-4dfe-b708-dff019252e10" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:04:58.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-760" for this suite.


• [SLOW TEST:7.008 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:04:58.552: INFO: Only supported for providers [azure] (not aws)
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:02.916: INFO: Only supported for providers [azure] (not aws)
... skipping 111 lines ...
Jul 10 08:04:49.732: INFO: PersistentVolumeClaim pvc-zwgsc found but phase is Pending instead of Bound.
Jul 10 08:04:51.894: INFO: PersistentVolumeClaim pvc-zwgsc found and phase=Bound (4.485102882s)
Jul 10 08:04:51.894: INFO: Waiting up to 3m0s for PersistentVolume local-fl7h8 to have phase Bound
Jul 10 08:04:52.056: INFO: PersistentVolume local-fl7h8 found and phase=Bound (161.4761ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j98h
STEP: Creating a pod to test subpath
Jul 10 08:04:52.568: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j98h" in namespace "provisioning-3648" to be "Succeeded or Failed"
Jul 10 08:04:52.737: INFO: Pod "pod-subpath-test-preprovisionedpv-j98h": Phase="Pending", Reason="", readiness=false. Elapsed: 168.992479ms
Jul 10 08:04:54.920: INFO: Pod "pod-subpath-test-preprovisionedpv-j98h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351698909s
Jul 10 08:04:57.081: INFO: Pod "pod-subpath-test-preprovisionedpv-j98h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513494494s
Jul 10 08:04:59.244: INFO: Pod "pod-subpath-test-preprovisionedpv-j98h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.676141459s
STEP: Saw pod success
Jul 10 08:04:59.244: INFO: Pod "pod-subpath-test-preprovisionedpv-j98h" satisfied condition "Succeeded or Failed"
Jul 10 08:04:59.405: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-j98h container test-container-subpath-preprovisionedpv-j98h: <nil>
STEP: delete the pod
Jul 10 08:04:59.738: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j98h to disappear
Jul 10 08:04:59.899: INFO: Pod pod-subpath-test-preprovisionedpv-j98h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j98h
Jul 10 08:04:59.899: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j98h" in namespace "provisioning-3648"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":32,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Jul 10 08:04:49.123: INFO: PersistentVolumeClaim pvc-xnq8n found but phase is Pending instead of Bound.
Jul 10 08:04:51.285: INFO: PersistentVolumeClaim pvc-xnq8n found and phase=Bound (4.483317241s)
Jul 10 08:04:51.285: INFO: Waiting up to 3m0s for PersistentVolume local-qqf49 to have phase Bound
Jul 10 08:04:51.445: INFO: PersistentVolume local-qqf49 found and phase=Bound (159.8474ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4dpf
STEP: Creating a pod to test subpath
Jul 10 08:04:51.936: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4dpf" in namespace "provisioning-5683" to be "Succeeded or Failed"
Jul 10 08:04:52.099: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf": Phase="Pending", Reason="", readiness=false. Elapsed: 162.570069ms
Jul 10 08:04:54.262: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326114992s
Jul 10 08:04:56.431: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495087715s
Jul 10 08:04:58.592: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.656003529s
STEP: Saw pod success
Jul 10 08:04:58.592: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf" satisfied condition "Succeeded or Failed"
Jul 10 08:04:58.754: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4dpf container test-container-subpath-preprovisionedpv-4dpf: <nil>
STEP: delete the pod
Jul 10 08:04:59.093: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4dpf to disappear
Jul 10 08:04:59.253: INFO: Pod pod-subpath-test-preprovisionedpv-4dpf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4dpf
Jul 10 08:04:59.253: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4dpf" in namespace "provisioning-5683"
STEP: Creating pod pod-subpath-test-preprovisionedpv-4dpf
STEP: Creating a pod to test subpath
Jul 10 08:04:59.582: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4dpf" in namespace "provisioning-5683" to be "Succeeded or Failed"
Jul 10 08:04:59.742: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf": Phase="Pending", Reason="", readiness=false. Elapsed: 159.374059ms
Jul 10 08:05:01.907: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324033774s
STEP: Saw pod success
Jul 10 08:05:01.907: INFO: Pod "pod-subpath-test-preprovisionedpv-4dpf" satisfied condition "Succeeded or Failed"
Jul 10 08:05:02.067: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4dpf container test-container-subpath-preprovisionedpv-4dpf: <nil>
STEP: delete the pod
Jul 10 08:05:02.396: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4dpf to disappear
Jul 10 08:05:02.556: INFO: Pod pod-subpath-test-preprovisionedpv-4dpf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4dpf
Jul 10 08:05:02.556: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4dpf" in namespace "provisioning-5683"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:390
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:05:08.005: INFO: >>> kubeConfig: /root/.kube/config
[It] watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:46
Jul 10 08:05:08.006: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:08.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:08.502: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 180 lines ...
Jul 10 08:04:22.267: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4n5c9] to have phase Bound
Jul 10 08:04:22.432: INFO: PersistentVolumeClaim pvc-4n5c9 found and phase=Bound (164.571617ms)
STEP: Deleting the previously created pod
Jul 10 08:04:31.265: INFO: Deleting pod "pvc-volume-tester-klz5g" in namespace "csi-mock-volumes-5346"
Jul 10 08:04:31.431: INFO: Wait up to 5m0s for pod "pvc-volume-tester-klz5g" to be fully deleted
STEP: Checking CSI driver logs
Jul 10 08:04:43.949: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/aafa842d-ac87-4236-9821-640423defa1f/volumes/kubernetes.io~csi/pvc-2a6ff502-746b-412f-bb6e-312ccbdd2081/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-klz5g
Jul 10 08:04:43.949: INFO: Deleting pod "pvc-volume-tester-klz5g" in namespace "csi-mock-volumes-5346"
STEP: Deleting claim pvc-4n5c9
Jul 10 08:04:44.472: INFO: Waiting up to 2m0s for PersistentVolume pvc-2a6ff502-746b-412f-bb6e-312ccbdd2081 to get deleted
Jul 10 08:04:44.638: INFO: PersistentVolume pvc-2a6ff502-746b-412f-bb6e-312ccbdd2081 was removed
STEP: Deleting storageclass csi-mock-volumes-5346-scxsldw
... skipping 54 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-89f13608-bb8f-4d8b-8fb1-41eb23cad536
STEP: Creating a pod to test consume configMaps
Jul 10 08:05:06.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29" in namespace "configmap-9115" to be "Succeeded or Failed"
Jul 10 08:05:06.642: INFO: Pod "pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29": Phase="Pending", Reason="", readiness=false. Elapsed: 161.12393ms
Jul 10 08:05:08.804: INFO: Pod "pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.3229288s
STEP: Saw pod success
Jul 10 08:05:08.804: INFO: Pod "pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29" satisfied condition "Succeeded or Failed"
Jul 10 08:05:08.965: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29 container agnhost-container: <nil>
STEP: delete the pod
Jul 10 08:05:09.294: INFO: Waiting for pod pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29 to disappear
Jul 10 08:05:09.455: INFO: Pod pod-configmaps-4d979748-5bfc-4b37-9b42-9b62e4d1de29 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:09.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9115" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:09.791: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:46.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
Jul 10 08:04:59.544: INFO: PersistentVolumeClaim pvc-x6cvj found and phase=Bound (159.69119ms)
Jul 10 08:04:59.544: INFO: Waiting up to 3m0s for PersistentVolume nfs-shtkt to have phase Bound
Jul 10 08:04:59.705: INFO: PersistentVolume nfs-shtkt found and phase=Bound (160.785539ms)
STEP: Checking pod has write access to PersistentVolume
Jul 10 08:05:00.024: INFO: Creating nfs test pod
Jul 10 08:05:00.186: INFO: Pod should terminate with exitcode 0 (success)
Jul 10 08:05:00.186: INFO: Waiting up to 5m0s for pod "pvc-tester-hn2j4" in namespace "pv-3510" to be "Succeeded or Failed"
Jul 10 08:05:00.346: INFO: Pod "pvc-tester-hn2j4": Phase="Pending", Reason="", readiness=false. Elapsed: 159.9135ms
Jul 10 08:05:02.508: INFO: Pod "pvc-tester-hn2j4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321539166s
STEP: Saw pod success
Jul 10 08:05:02.508: INFO: Pod "pvc-tester-hn2j4" satisfied condition "Succeeded or Failed"
Jul 10 08:05:02.508: INFO: Pod pvc-tester-hn2j4 succeeded 
Jul 10 08:05:02.508: INFO: Deleting pod "pvc-tester-hn2j4" in namespace "pv-3510"
Jul 10 08:05:02.672: INFO: Wait up to 5m0s for pod "pvc-tester-hn2j4" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul 10 08:05:02.832: INFO: Deleting PVC pvc-x6cvj to trigger reclamation of PV 
Jul 10 08:05:02.832: INFO: Deleting PersistentVolumeClaim "pvc-x6cvj"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":3,"skipped":25,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:14.498: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 25 lines ...
Jul 10 08:04:48.733: INFO: PersistentVolumeClaim pvc-zrcns found but phase is Pending instead of Bound.
Jul 10 08:04:50.896: INFO: PersistentVolumeClaim pvc-zrcns found and phase=Bound (13.142941172s)
Jul 10 08:04:50.896: INFO: Waiting up to 3m0s for PersistentVolume local-rswt6 to have phase Bound
Jul 10 08:04:51.058: INFO: PersistentVolume local-rswt6 found and phase=Bound (162.329749ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jcvf
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 08:04:51.543: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jcvf" in namespace "provisioning-7778" to be "Succeeded or Failed"
Jul 10 08:04:51.705: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Pending", Reason="", readiness=false. Elapsed: 161.442149ms
Jul 10 08:04:53.870: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326521272s
Jul 10 08:04:56.032: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 4.488599665s
Jul 10 08:04:58.194: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 6.65035402s
Jul 10 08:05:00.357: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 8.813702415s
Jul 10 08:05:02.520: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 10.976254872s
Jul 10 08:05:04.682: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 13.138456169s
Jul 10 08:05:06.844: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 15.300401608s
Jul 10 08:05:09.006: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 17.462784717s
Jul 10 08:05:11.172: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 19.628317867s
Jul 10 08:05:13.335: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Running", Reason="", readiness=true. Elapsed: 21.791897699s
Jul 10 08:05:15.499: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.955184131s
STEP: Saw pod success
Jul 10 08:05:15.499: INFO: Pod "pod-subpath-test-preprovisionedpv-jcvf" satisfied condition "Succeeded or Failed"
Jul 10 08:05:15.660: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-jcvf container test-container-subpath-preprovisionedpv-jcvf: <nil>
STEP: delete the pod
Jul 10 08:05:15.989: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jcvf to disappear
Jul 10 08:05:16.150: INFO: Pod pod-subpath-test-preprovisionedpv-jcvf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jcvf
Jul 10 08:05:16.151: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jcvf" in namespace "provisioning-7778"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:18.402: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 23 lines ...
Jul 10 08:05:14.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 10 08:05:15.561: INFO: Waiting up to 5m0s for pod "pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b" in namespace "emptydir-761" to be "Succeeded or Failed"
Jul 10 08:05:15.722: INFO: Pod "pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b": Phase="Pending", Reason="", readiness=false. Elapsed: 160.29819ms
Jul 10 08:05:17.883: INFO: Pod "pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321338914s
STEP: Saw pod success
Jul 10 08:05:17.883: INFO: Pod "pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b" satisfied condition "Succeeded or Failed"
Jul 10 08:05:18.044: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b container test-container: <nil>
STEP: delete the pod
Jul 10 08:05:18.375: INFO: Waiting for pod pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b to disappear
Jul 10 08:05:18.534: INFO: Pod pod-bd18c89f-fdf6-42ff-a31d-de88a57df87b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:18.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-761" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:18.887: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:05:24.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul 10 08:05:25.866: INFO: Waiting up to 5m0s for pod "security-context-0da95e82-7524-4632-b708-b285cda5ef97" in namespace "security-context-9598" to be "Succeeded or Failed"
Jul 10 08:05:26.026: INFO: Pod "security-context-0da95e82-7524-4632-b708-b285cda5ef97": Phase="Pending", Reason="", readiness=false. Elapsed: 159.92896ms
Jul 10 08:05:28.186: INFO: Pod "security-context-0da95e82-7524-4632-b708-b285cda5ef97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.320448208s
STEP: Saw pod success
Jul 10 08:05:28.186: INFO: Pod "security-context-0da95e82-7524-4632-b708-b285cda5ef97" satisfied condition "Succeeded or Failed"
Jul 10 08:05:28.346: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod security-context-0da95e82-7524-4632-b708-b285cda5ef97 container test-container: <nil>
STEP: delete the pod
Jul 10 08:05:28.677: INFO: Waiting for pod security-context-0da95e82-7524-4632-b708-b285cda5ef97 to disappear
Jul 10 08:05:28.837: INFO: Pod security-context-0da95e82-7524-4632-b708-b285cda5ef97 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:28.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9598" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:29.184: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9695" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0}
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:05:29.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:32.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9788" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":45,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:34.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9920" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:34.702: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 143 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":48,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Jul 10 08:05:34.226: INFO: PersistentVolumeClaim pvc-c2k8v found but phase is Pending instead of Bound.
Jul 10 08:05:36.387: INFO: PersistentVolumeClaim pvc-c2k8v found and phase=Bound (2.321497192s)
Jul 10 08:05:36.387: INFO: Waiting up to 3m0s for PersistentVolume local-dqxv4 to have phase Bound
Jul 10 08:05:36.547: INFO: PersistentVolume local-dqxv4 found and phase=Bound (159.866921ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j4qg
STEP: Creating a pod to test subpath
Jul 10 08:05:37.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j4qg" in namespace "provisioning-8372" to be "Succeeded or Failed"
Jul 10 08:05:37.189: INFO: Pod "pod-subpath-test-preprovisionedpv-j4qg": Phase="Pending", Reason="", readiness=false. Elapsed: 160.325781ms
Jul 10 08:05:39.350: INFO: Pod "pod-subpath-test-preprovisionedpv-j4qg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321412052s
STEP: Saw pod success
Jul 10 08:05:39.350: INFO: Pod "pod-subpath-test-preprovisionedpv-j4qg" satisfied condition "Succeeded or Failed"
Jul 10 08:05:39.511: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-j4qg container test-container-subpath-preprovisionedpv-j4qg: <nil>
STEP: delete the pod
Jul 10 08:05:39.867: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j4qg to disappear
Jul 10 08:05:40.027: INFO: Pod pod-subpath-test-preprovisionedpv-j4qg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j4qg
Jul 10 08:05:40.027: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j4qg" in namespace "provisioning-8372"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:42.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9642" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:500
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":5,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:43.863: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 392 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:05:46.092: INFO: Waiting up to 5m0s for pod "metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44" in namespace "downward-api-9339" to be "Succeeded or Failed"
Jul 10 08:05:46.253: INFO: Pod "metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44": Phase="Pending", Reason="", readiness=false. Elapsed: 160.984401ms
Jul 10 08:05:48.415: INFO: Pod "metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322483747s
STEP: Saw pod success
Jul 10 08:05:48.415: INFO: Pod "metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44" satisfied condition "Succeeded or Failed"
Jul 10 08:05:48.576: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44 container client-container: <nil>
STEP: delete the pod
Jul 10 08:05:48.909: INFO: Waiting for pod metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44 to disappear
Jul 10 08:05:49.069: INFO: Pod metadata-volume-dd958464-bdda-4507-b7d9-f9aa323a5c44 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:49.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9339" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:05:49.419: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 83 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:05:48.745: INFO: >>> kubeConfig: /root/.kube/config
... skipping 127 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:05:50.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717" in namespace "projected-9633" to be "Succeeded or Failed"
Jul 10 08:05:50.706: INFO: Pod "downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717": Phase="Pending", Reason="", readiness=false. Elapsed: 161.200211ms
Jul 10 08:05:52.868: INFO: Pod "downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.32355619s
STEP: Saw pod success
Jul 10 08:05:52.869: INFO: Pod "downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717" satisfied condition "Succeeded or Failed"
Jul 10 08:05:53.030: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717 container client-container: <nil>
STEP: delete the pod
Jul 10 08:05:53.362: INFO: Waiting for pod downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717 to disappear
Jul 10 08:05:53.523: INFO: Pod downwardapi-volume-f0aa2193-4efb-452a-8027-d8e7efc2f717 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:05:53.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9633" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 177 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":3,"skipped":44,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
Jul 10 08:05:50.770: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:05:50.932: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tj7v
STEP: Creating a pod to test subpath
Jul 10 08:05:51.119: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tj7v" in namespace "provisioning-2930" to be "Succeeded or Failed"
Jul 10 08:05:51.279: INFO: Pod "pod-subpath-test-inlinevolume-tj7v": Phase="Pending", Reason="", readiness=false. Elapsed: 160.042232ms
Jul 10 08:05:53.442: INFO: Pod "pod-subpath-test-inlinevolume-tj7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32247083s
Jul 10 08:05:55.602: INFO: Pod "pod-subpath-test-inlinevolume-tj7v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.48312178s
STEP: Saw pod success
Jul 10 08:05:55.602: INFO: Pod "pod-subpath-test-inlinevolume-tj7v" satisfied condition "Succeeded or Failed"
Jul 10 08:05:55.763: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-tj7v container test-container-subpath-inlinevolume-tj7v: <nil>
STEP: delete the pod
Jul 10 08:05:56.089: INFO: Waiting for pod pod-subpath-test-inlinevolume-tj7v to disappear
Jul 10 08:05:56.249: INFO: Pod pod-subpath-test-inlinevolume-tj7v no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tj7v
Jul 10 08:05:56.249: INFO: Deleting pod "pod-subpath-test-inlinevolume-tj7v" in namespace "provisioning-2930"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:59.719 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":4,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:02.748: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 420 lines ...
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-6064
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6064
STEP: Deleting pre-stop pod
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
Jul 10 08:06:10.067: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 21 lines ...
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:04:23 +0000 UTC - event for server: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Started: Started container agnhost-container
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:04:27 +0000 UTC - event for tester: {default-scheduler } Scheduled: Successfully assigned prestop-6064/tester to ip-172-20-37-88.ap-northeast-2.compute.internal
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:04:27 +0000 UTC - event for tester: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:04:27 +0000 UTC - event for tester: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Created: Created container tester
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:04:27 +0000 UTC - event for tester: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Started: Started container tester
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:04:30 +0000 UTC - event for tester: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Killing: Stopping container tester
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:05:02 +0000 UTC - event for tester: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} FailedPreStopHook: Exec lifecycle hook ([wget -O- --post-data={"Source": "prestop"} http://100.96.4.17:8080/write]) for Container "tester" in Pod "tester_prestop-6064(3a35a989-77d2-4a14-8091-365da4e370a5)" failed - error: command 'wget -O- --post-data={"Source": "prestop"} http://100.96.4.17:8080/write' exited with 137: Connecting to 100.96.4.17:8080 (100.96.4.17:8080)
, message: "Connecting to 100.96.4.17:8080 (100.96.4.17:8080)\n"
Jul 10 08:06:10.400: INFO: At 2021-07-10 08:06:10 +0000 UTC - event for server: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Killing: Stopping container agnhost-container
Jul 10 08:06:10.561: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul 10 08:06:10.562: INFO: 
Jul 10 08:06:10.725: INFO: 
Logging node info for node ip-172-20-35-182.ap-northeast-2.compute.internal
... skipping 173 lines ...
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:06:10.067: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSS
------------------------------
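Note: the FailedPreStopHook event above records the exact hook command that timed out against the server pod at 100.96.4.17:8080, and the validation loop also hit "the server is currently unable to handle the request (get pods server)". A hand-run version of the same checks, using the pod names and IP from this (now deleted) prestop-6064 namespace, would look roughly like:

# Hypothetical reproduction; all names and the IP below are copied from the events above and existed only for this run.
kubectl get --raw='/readyz?verbose'                 # "unable to handle the request" errors point at the API server first
kubectl -n prestop-6064 get pod server -o wide      # confirm the server pod IP (100.96.4.17 in this run)
kubectl -n prestop-6064 exec tester -- wget -O- --post-data='{"Source": "prestop"}' http://100.96.4.17:8080/write
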
[BeforeEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jul 10 08:06:16.502: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 119 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul 10 08:05:55.614: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:05:55.773: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-94c2
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 08:05:55.933: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-94c2" in namespace "provisioning-9953" to be "Succeeded or Failed"
Jul 10 08:05:56.094: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Pending", Reason="", readiness=false. Elapsed: 160.774541ms
Jul 10 08:05:58.252: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319041201s
Jul 10 08:06:00.410: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 4.476695852s
Jul 10 08:06:02.567: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 6.634595274s
Jul 10 08:06:04.726: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 8.793333686s
Jul 10 08:06:06.884: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 10.951201729s
Jul 10 08:06:09.042: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 13.108884346s
Jul 10 08:06:11.201: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 15.267975861s
Jul 10 08:06:13.359: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 17.425816429s
Jul 10 08:06:15.525: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 19.592503416s
Jul 10 08:06:17.684: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Running", Reason="", readiness=true. Elapsed: 21.750937362s
Jul 10 08:06:19.842: INFO: Pod "pod-subpath-test-inlinevolume-94c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.909099695s
STEP: Saw pod success
Jul 10 08:06:19.842: INFO: Pod "pod-subpath-test-inlinevolume-94c2" satisfied condition "Succeeded or Failed"
Jul 10 08:06:19.999: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-94c2 container test-container-subpath-inlinevolume-94c2: <nil>
STEP: delete the pod
Jul 10 08:06:20.325: INFO: Waiting for pod pod-subpath-test-inlinevolume-94c2 to disappear
Jul 10 08:06:20.482: INFO: Pod pod-subpath-test-inlinevolume-94c2 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-94c2
Jul 10 08:06:20.482: INFO: Deleting pod "pod-subpath-test-inlinevolume-94c2" in namespace "provisioning-9953"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":49,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 65 lines ...
Jul 10 08:05:28.498: INFO: PersistentVolumeClaim csi-hostpathtx9q6 found but phase is Pending instead of Bound.
Jul 10 08:05:30.659: INFO: PersistentVolumeClaim csi-hostpathtx9q6 found but phase is Pending instead of Bound.
Jul 10 08:05:32.819: INFO: PersistentVolumeClaim csi-hostpathtx9q6 found but phase is Pending instead of Bound.
Jul 10 08:05:34.980: INFO: PersistentVolumeClaim csi-hostpathtx9q6 found and phase=Bound (17.445747043s)
STEP: Creating pod pod-subpath-test-dynamicpv-5lzp
STEP: Creating a pod to test subpath
Jul 10 08:05:35.468: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5lzp" in namespace "provisioning-2259" to be "Succeeded or Failed"
Jul 10 08:05:35.628: INFO: Pod "pod-subpath-test-dynamicpv-5lzp": Phase="Pending", Reason="", readiness=false. Elapsed: 159.842081ms
Jul 10 08:05:37.789: INFO: Pod "pod-subpath-test-dynamicpv-5lzp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320285923s
Jul 10 08:05:39.954: INFO: Pod "pod-subpath-test-dynamicpv-5lzp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.485553355s
STEP: Saw pod success
Jul 10 08:05:39.954: INFO: Pod "pod-subpath-test-dynamicpv-5lzp" satisfied condition "Succeeded or Failed"
Jul 10 08:05:40.114: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-5lzp container test-container-volume-dynamicpv-5lzp: <nil>
STEP: delete the pod
Jul 10 08:05:40.442: INFO: Waiting for pod pod-subpath-test-dynamicpv-5lzp to disappear
Jul 10 08:05:40.602: INFO: Pod pod-subpath-test-dynamicpv-5lzp no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5lzp
Jul 10 08:05:40.602: INFO: Deleting pod "pod-subpath-test-dynamicpv-5lzp" in namespace "provisioning-2259"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 173 lines ...
• [SLOW TEST:9.514 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:33.625: INFO: Only supported for providers [azure] (not aws)
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":70,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:39.270: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Jul 10 08:06:39.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jul 10 08:06:40.311: INFO: Waiting up to 5m0s for pod "security-context-f4d5e375-7e89-409d-949b-df990260a9dd" in namespace "security-context-3823" to be "Succeeded or Failed"
Jul 10 08:06:40.474: INFO: Pod "security-context-f4d5e375-7e89-409d-949b-df990260a9dd": Phase="Pending", Reason="", readiness=false. Elapsed: 163.466583ms
Jul 10 08:06:42.639: INFO: Pod "security-context-f4d5e375-7e89-409d-949b-df990260a9dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328228518s
Jul 10 08:06:44.804: INFO: Pod "security-context-f4d5e375-7e89-409d-949b-df990260a9dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.493052114s
STEP: Saw pod success
Jul 10 08:06:44.804: INFO: Pod "security-context-f4d5e375-7e89-409d-949b-df990260a9dd" satisfied condition "Succeeded or Failed"
Jul 10 08:06:44.967: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod security-context-f4d5e375-7e89-409d-949b-df990260a9dd container test-container: <nil>
STEP: delete the pod
Jul 10 08:06:45.305: INFO: Waiting for pod security-context-f4d5e375-7e89-409d-949b-df990260a9dd to disappear
Jul 10 08:06:45.468: INFO: Pod security-context-f4d5e375-7e89-409d-949b-df990260a9dd no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.474 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":7,"skipped":84,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:45.810: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33 lines ...
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3801 to expose endpoints map[pod1:[80]]
Jul 10 08:04:18.866: INFO: successfully validated that service endpoint-test2 in namespace services-3801 exposes endpoints map[pod1:[80]]
STEP: Checking if the Service forwards traffic to pod1
Jul 10 08:04:18.866: INFO: Creating new exec pod
Jul 10 08:04:22.371: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:04:29.110: INFO: rc: 1
Jul 10 08:04:29.110: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:04:30.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:04:36.760: INFO: rc: 1
Jul 10 08:04:36.760: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:04:37.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:04:43.790: INFO: rc: 1
Jul 10 08:04:43.790: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:04:44.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:04:50.839: INFO: rc: 1
Jul 10 08:04:50.839: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:04:51.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:04:57.812: INFO: rc: 1
Jul 10 08:04:57.812: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ + nc -vecho -t hostName -w
 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:04:58.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:04.824: INFO: rc: 1
Jul 10 08:05:04.824: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:05.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:11.797: INFO: rc: 1
Jul 10 08:05:11.797: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:12.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:18.806: INFO: rc: 1
Jul 10 08:05:18.806: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:19.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:25.819: INFO: rc: 1
Jul 10 08:05:25.819: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:26.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:32.835: INFO: rc: 1
Jul 10 08:05:32.835: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ + echo hostName
nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:33.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:39.860: INFO: rc: 1
Jul 10 08:05:39.860: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:40.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:46.777: INFO: rc: 1
Jul 10 08:05:46.777: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:47.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:05:53.701: INFO: rc: 1
Jul 10 08:05:53.701: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ + nc -v -techo -w hostName 2
 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:05:54.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:00.729: INFO: rc: 1
Jul 10 08:06:00.729: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:01.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:07.747: INFO: rc: 1
Jul 10 08:06:07.747: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:08.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:14.703: INFO: rc: 1
Jul 10 08:06:14.703: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:15.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:21.723: INFO: rc: 1
Jul 10 08:06:21.723: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:22.112: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:28.760: INFO: rc: 1
Jul 10 08:06:28.760: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:29.111: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:35.742: INFO: rc: 1
Jul 10 08:06:35.742: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:35.743: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jul 10 08:06:42.346: INFO: rc: 1
Jul 10 08:06:42.346: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:42.346: FAIL: Unexpected error:
    <*errors.errorString | 0xc000316160>: {
        s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
occurred

... skipping 210 lines ...
• Failure [155.577 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:06:42.346: Unexpected error:
      <*errors.errorString | 0xc000316160>: {
          s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:815
------------------------------
{"msg":"FAILED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":1,"skipped":5,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

S
------------------------------
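Note: the probe this test retried until its 2m0s timeout appears verbatim in the log, and every attempt failed inside the exec pod with "nc: getaddrinfo: Try again", i.e. the service name never resolved. Rerunning the same probe by hand and checking the Endpoints object are the obvious follow-ups; the services-3801 namespace and execpodzqscw pod existed only for this run, so the commands below are illustrative:

# Hypothetical manual re-run of the failing probe (command copied from the log above):
kubectl --namespace=services-3801 exec execpodzqscw -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 endpoint-test2 80'
# getaddrinfo failures point at in-cluster DNS rather than the Endpoints themselves, which can still be checked with:
kubectl --namespace=services-3801 get endpoints endpoint-test2
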
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jul 10 08:06:50.479: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 10 08:06:50.479: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2xsl
STEP: Creating a pod to test subpath
Jul 10 08:06:50.664: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2xsl" in namespace "provisioning-8906" to be "Succeeded or Failed"
Jul 10 08:06:50.828: INFO: Pod "pod-subpath-test-inlinevolume-2xsl": Phase="Pending", Reason="", readiness=false. Elapsed: 163.196452ms
Jul 10 08:06:52.992: INFO: Pod "pod-subpath-test-inlinevolume-2xsl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.327377953s
STEP: Saw pod success
Jul 10 08:06:52.992: INFO: Pod "pod-subpath-test-inlinevolume-2xsl" satisfied condition "Succeeded or Failed"
Jul 10 08:06:53.155: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-2xsl container test-container-volume-inlinevolume-2xsl: <nil>
STEP: delete the pod
Jul 10 08:06:53.488: INFO: Waiting for pod pod-subpath-test-inlinevolume-2xsl to disappear
Jul 10 08:06:53.651: INFO: Pod pod-subpath-test-inlinevolume-2xsl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2xsl
Jul 10 08:06:53.651: INFO: Deleting pod "pod-subpath-test-inlinevolume-2xsl" in namespace "provisioning-8906"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:06:53.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8906" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":6,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:54.319: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Jul 10 08:06:00.837: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:06:11.260: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:06:21.662: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:06:32.060: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:06:42.382: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:06:42.383: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001c8250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 396 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:06:42.383: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001c8250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1749
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":5,"skipped":78,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:55.290: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...
STEP: creating replication controller externalname-service in namespace services-911
I0710 08:04:21.496283   12919 runners.go:190] Created replication controller with name: externalname-service, namespace: services-911, replica count: 2
I0710 08:04:24.697634   12919 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 10 08:04:24.697: INFO: Creating new exec pod
Jul 10 08:04:28.184: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul 10 08:04:34.918: INFO: rc: 1
Jul 10 08:04:34.919: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:04:35.919: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul 10 08:04:42.646: INFO: rc: 1
Jul 10 08:04:42.646: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 224 lines ...
Jul 10 08:06:34.919: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul 10 08:06:41.543: INFO: rc: 1
Jul 10 08:06:41.543: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:41.543: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul 10 08:06:48.195: INFO: rc: 1
Jul 10 08:06:48.195: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-911 exec execpod7xdfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:06:48.196: FAIL: Unexpected error:
    <*errors.errorString | 0xc003634180>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 222 lines ...
• Failure [155.994 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:06:48.196: Unexpected error:
      <*errors.errorString | 0xc003634180>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1333
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":0,"skipped":6,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:06:56.075: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-7c544ea1-c1be-48ed-8257-7efda424e4e4
STEP: Creating a pod to test consume configMaps
Jul 10 08:06:56.443: INFO: Waiting up to 5m0s for pod "pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782" in namespace "configmap-6753" to be "Succeeded or Failed"
Jul 10 08:06:56.602: INFO: Pod "pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782": Phase="Pending", Reason="", readiness=false. Elapsed: 159.766773ms
Jul 10 08:06:58.763: INFO: Pod "pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.320302874s
STEP: Saw pod success
Jul 10 08:06:58.763: INFO: Pod "pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782" satisfied condition "Succeeded or Failed"
Jul 10 08:06:58.923: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782 container agnhost-container: <nil>
STEP: delete the pod
Jul 10 08:06:59.250: INFO: Waiting for pod pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782 to disappear
Jul 10 08:06:59.410: INFO: Pod pod-configmaps-21ae34e8-6b34-44e8-8e69-1507868ef782 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:06:59.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6753" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":81,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:63.801 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":7,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:00.736: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:07:01.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9656" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":7,"skipped":86,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:01.869: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 163 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:07:05.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4692" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":8,"skipped":105,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:06.029: INFO: Only supported for providers [azure] (not aws)
... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":5,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:08.080: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81
Jul 10 08:04:05.764: INFO: Pod name my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81: Found 1 pods out of 1
Jul 10 08:04:05.764: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81" are running
Jul 10 08:04:16.094: INFO: Pod "my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-10 08:04:05 +0000 UTC Reason: Message:}])
Jul 10 08:04:16.094: INFO: Trying to dial the pod
Jul 10 08:04:51.597: INFO: Controller my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81: Failed to GET from replica 1 [my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2]: the server is currently unable to handle the request (get pods my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761501045, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul 10 08:05:26.596: INFO: Controller my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81: Failed to GET from replica 1 [my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2]: the server is currently unable to handle the request (get pods my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761501045, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul 10 08:06:01.574: INFO: Controller my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81: Failed to GET from replica 1 [my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2]: the server is currently unable to handle the request (get pods my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761501045, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul 10 08:06:36.582: INFO: Controller my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81: Failed to GET from replica 1 [my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2]: the server is currently unable to handle the request (get pods my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761501045, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul 10 08:07:07.069: INFO: Controller my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81: Failed to GET from replica 1 [my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2]: the server is currently unable to handle the request (get pods my-hostname-basic-bff81ef6-abfd-4594-b76c-fdf4cc669a81-x5wc2)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761501045, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul 10 08:07:07.070: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000955800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 201 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:07:07.070: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
------------------------------
{"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":0,"skipped":16,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 24 lines ...
Jul 10 08:07:05.137: INFO: PersistentVolumeClaim pvc-qzbrw found but phase is Pending instead of Bound.
Jul 10 08:07:07.301: INFO: PersistentVolumeClaim pvc-qzbrw found and phase=Bound (15.316096753s)
Jul 10 08:07:07.301: INFO: Waiting up to 3m0s for PersistentVolume local-qt7bp to have phase Bound
Jul 10 08:07:07.465: INFO: PersistentVolume local-qt7bp found and phase=Bound (163.306433ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-8thm
STEP: Creating a pod to test exec-volume-test
Jul 10 08:07:07.957: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8thm" in namespace "volume-1285" to be "Succeeded or Failed"
Jul 10 08:07:08.120: INFO: Pod "exec-volume-test-preprovisionedpv-8thm": Phase="Pending", Reason="", readiness=false. Elapsed: 163.092533ms
Jul 10 08:07:10.284: INFO: Pod "exec-volume-test-preprovisionedpv-8thm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326942139s
Jul 10 08:07:12.448: INFO: Pod "exec-volume-test-preprovisionedpv-8thm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.491207115s
STEP: Saw pod success
Jul 10 08:07:12.448: INFO: Pod "exec-volume-test-preprovisionedpv-8thm" satisfied condition "Succeeded or Failed"
Jul 10 08:07:12.612: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-8thm container exec-container-preprovisionedpv-8thm: <nil>
STEP: delete the pod
Jul 10 08:07:12.946: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8thm to disappear
Jul 10 08:07:13.110: INFO: Pod exec-volume-test-preprovisionedpv-8thm no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8thm
Jul 10 08:07:13.110: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8thm" in namespace "volume-1285"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1037
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":8,"skipped":54,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Jul 10 08:07:18.669: INFO: PersistentVolumeClaim pvc-92fzp found but phase is Pending instead of Bound.
Jul 10 08:07:20.828: INFO: PersistentVolumeClaim pvc-92fzp found and phase=Bound (8.797445094s)
Jul 10 08:07:20.828: INFO: Waiting up to 3m0s for PersistentVolume local-7r9dd to have phase Bound
Jul 10 08:07:20.987: INFO: PersistentVolume local-7r9dd found and phase=Bound (159.014804ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qx4v
STEP: Creating a pod to test subpath
Jul 10 08:07:21.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qx4v" in namespace "provisioning-8746" to be "Succeeded or Failed"
Jul 10 08:07:21.625: INFO: Pod "pod-subpath-test-preprovisionedpv-qx4v": Phase="Pending", Reason="", readiness=false. Elapsed: 159.513203ms
Jul 10 08:07:23.786: INFO: Pod "pod-subpath-test-preprovisionedpv-qx4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319852764s
Jul 10 08:07:25.946: INFO: Pod "pod-subpath-test-preprovisionedpv-qx4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.479972545s
STEP: Saw pod success
Jul 10 08:07:25.946: INFO: Pod "pod-subpath-test-preprovisionedpv-qx4v" satisfied condition "Succeeded or Failed"
Jul 10 08:07:26.105: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-qx4v container test-container-subpath-preprovisionedpv-qx4v: <nil>
STEP: delete the pod
Jul 10 08:07:26.431: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qx4v to disappear
Jul 10 08:07:26.590: INFO: Pod pod-subpath-test-preprovisionedpv-qx4v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qx4v
Jul 10 08:07:26.590: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qx4v" in namespace "provisioning-8746"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":123,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:36.775 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":1,"skipped":9,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:32.885: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1011
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":9,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":87,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:07:36.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6317" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":10,"skipped":91,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:07:32.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-30614d4c-609a-4bde-a135-1a240c04e834
STEP: Creating a pod to test consume secrets
Jul 10 08:07:34.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752" in namespace "projected-6578" to be "Succeeded or Failed"
Jul 10 08:07:34.210: INFO: Pod "pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752": Phase="Pending", Reason="", readiness=false. Elapsed: 160.792754ms
Jul 10 08:07:36.371: INFO: Pod "pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321889601s
STEP: Saw pod success
Jul 10 08:07:36.372: INFO: Pod "pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752" satisfied condition "Succeeded or Failed"
Jul 10 08:07:36.532: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 10 08:07:36.860: INFO: Waiting for pod pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752 to disappear
Jul 10 08:07:37.020: INFO: Pod pod-projected-secrets-b59b2f78-8bb3-4341-bdbe-3d0c27a7b752 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:07:37.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6578" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:37.358: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
• [SLOW TEST:8.616 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":3,"skipped":18,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:46.066: INFO: Only supported for providers [gce gke] (not aws)
... skipping 94 lines ...
• [SLOW TEST:95.646 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":2,"skipped":20,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:52.250: INFO: Only supported for providers [gce gke] (not aws)
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:07:55.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1812" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":3,"skipped":22,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:56.010: INFO: Only supported for providers [vsphere] (not aws)
... skipping 101 lines ...
Jul 10 08:05:35.633: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-875687k9m
STEP: creating a claim
Jul 10 08:05:35.795: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-nt9x
STEP: Creating a pod to test subpath
Jul 10 08:05:36.280: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nt9x" in namespace "provisioning-8756" to be "Succeeded or Failed"
Jul 10 08:05:36.441: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 160.866191ms
Jul 10 08:05:38.603: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322657142s
Jul 10 08:05:40.769: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488497715s
Jul 10 08:05:42.930: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.650055699s
Jul 10 08:05:45.092: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.811482514s
Jul 10 08:05:47.254: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.973773329s
... skipping 35 lines ...
Jul 10 08:07:05.102: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.821918636s
Jul 10 08:07:07.264: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.984045329s
Jul 10 08:07:09.426: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.145587085s
Jul 10 08:07:11.588: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.307511252s
Jul 10 08:07:13.750: INFO: Pod "pod-subpath-test-dynamicpv-nt9x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m37.469844859s
STEP: Saw pod success
Jul 10 08:07:13.750: INFO: Pod "pod-subpath-test-dynamicpv-nt9x" satisfied condition "Succeeded or Failed"
Jul 10 08:07:13.915: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-nt9x container test-container-volume-dynamicpv-nt9x: <nil>
STEP: delete the pod
Jul 10 08:07:14.245: INFO: Waiting for pod pod-subpath-test-dynamicpv-nt9x to disappear
Jul 10 08:07:14.405: INFO: Pod pod-subpath-test-dynamicpv-nt9x no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nt9x
Jul 10 08:07:14.406: INFO: Deleting pod "pod-subpath-test-dynamicpv-nt9x" in namespace "provisioning-8756"
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:07:57.186: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 95 lines ...
• [SLOW TEST:27.981 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":10,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:02.180: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
Jul 10 08:07:56.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 10 08:07:57.103: INFO: Waiting up to 5m0s for pod "pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e" in namespace "emptydir-7603" to be "Succeeded or Failed"
Jul 10 08:07:57.263: INFO: Pod "pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 159.854315ms
Jul 10 08:07:59.423: INFO: Pod "pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e": Phase="Running", Reason="", readiness=true. Elapsed: 2.319610265s
Jul 10 08:08:01.583: INFO: Pod "pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.479942149s
STEP: Saw pod success
Jul 10 08:08:01.583: INFO: Pod "pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e" satisfied condition "Succeeded or Failed"
Jul 10 08:08:01.742: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e container test-container: <nil>
STEP: delete the pod
Jul 10 08:08:02.068: INFO: Waiting for pod pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e to disappear
Jul 10 08:08:02.229: INFO: Pod pod-55e3cfca-5a18-41c4-a115-56eb190b2d9e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.408 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":43,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:07:57.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 10 08:07:58.179: INFO: Waiting up to 5m0s for pod "pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3" in namespace "emptydir-1212" to be "Succeeded or Failed"
Jul 10 08:07:58.341: INFO: Pod "pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 161.383973ms
Jul 10 08:08:00.503: INFO: Pod "pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323215127s
Jul 10 08:08:02.668: INFO: Pod "pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.48867615s
STEP: Saw pod success
Jul 10 08:08:02.668: INFO: Pod "pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3" satisfied condition "Succeeded or Failed"
Jul 10 08:08:02.830: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3 container test-container: <nil>
STEP: delete the pod
Jul 10 08:08:03.157: INFO: Waiting for pod pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3 to disappear
Jul 10 08:08:03.318: INFO: Pod pod-5db1dff3-7338-4540-bb16-fd2bf151e2a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.434 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 174 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":55,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:03.856: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:03.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-2471" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":5,"skipped":51,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:03.903: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 146 lines ...
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Jul 10 08:07:11.008: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:07:21.437: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:07:31.843: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:07:42.238: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:07:52.571: INFO: Waiting for webhook configuration to be ready...
Jul 10 08:07:52.572: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001c8250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 419 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:07:52.572: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001c8250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":2,"skipped":9,"failed":2,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:08:02.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 10 08:08:03.230: INFO: Waiting up to 5m0s for pod "downward-api-0338046a-c297-47c5-9c18-375b063c5d53" in namespace "downward-api-2235" to be "Succeeded or Failed"
Jul 10 08:08:03.389: INFO: Pod "downward-api-0338046a-c297-47c5-9c18-375b063c5d53": Phase="Pending", Reason="", readiness=false. Elapsed: 159.513135ms
Jul 10 08:08:05.550: INFO: Pod "downward-api-0338046a-c297-47c5-9c18-375b063c5d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.32075772s
STEP: Saw pod success
Jul 10 08:08:05.551: INFO: Pod "downward-api-0338046a-c297-47c5-9c18-375b063c5d53" satisfied condition "Succeeded or Failed"
Jul 10 08:08:05.711: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod downward-api-0338046a-c297-47c5-9c18-375b063c5d53 container dapi-container: <nil>
STEP: delete the pod
Jul 10 08:08:06.037: INFO: Waiting for pod downward-api-0338046a-c297-47c5-9c18-375b063c5d53 to disappear
Jul 10 08:08:06.197: INFO: Pod downward-api-0338046a-c297-47c5-9c18-375b063c5d53 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:06.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2235" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":73,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:05:52.499: INFO: >>> kubeConfig: /root/.kube/config
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":7,"skipped":47,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:08:04.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 10 08:08:04.981: INFO: Waiting up to 5m0s for pod "downward-api-acb51039-8261-497f-898d-ac17dd57e217" in namespace "downward-api-9100" to be "Succeeded or Failed"
Jul 10 08:08:05.147: INFO: Pod "downward-api-acb51039-8261-497f-898d-ac17dd57e217": Phase="Pending", Reason="", readiness=false. Elapsed: 165.547575ms
Jul 10 08:08:07.307: INFO: Pod "downward-api-acb51039-8261-497f-898d-ac17dd57e217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.3260833s
STEP: Saw pod success
Jul 10 08:08:07.307: INFO: Pod "downward-api-acb51039-8261-497f-898d-ac17dd57e217" satisfied condition "Succeeded or Failed"
Jul 10 08:08:07.467: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod downward-api-acb51039-8261-497f-898d-ac17dd57e217 container dapi-container: <nil>
STEP: delete the pod
Jul 10 08:08:07.799: INFO: Waiting for pod downward-api-acb51039-8261-497f-898d-ac17dd57e217 to disappear
Jul 10 08:08:07.960: INFO: Pod downward-api-acb51039-8261-497f-898d-ac17dd57e217 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 9 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-b870dcb5-6b56-4c48-80cb-3d1b0451a624
STEP: Creating a pod to test consume secrets
Jul 10 08:08:05.001: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9" in namespace "projected-2855" to be "Succeeded or Failed"
Jul 10 08:08:05.161: INFO: Pod "pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9": Phase="Pending", Reason="", readiness=false. Elapsed: 159.546195ms
Jul 10 08:08:07.319: INFO: Pod "pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318075661s
STEP: Saw pod success
Jul 10 08:08:07.319: INFO: Pod "pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9" satisfied condition "Succeeded or Failed"
Jul 10 08:08:07.476: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9 container secret-volume-test: <nil>
STEP: delete the pod
Jul 10 08:08:07.804: INFO: Waiting for pod pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9 to disappear
Jul 10 08:08:07.963: INFO: Pod pod-projected-secrets-f67df73b-00ef-45e1-baac-41ddcc6517b9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:07.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2855" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":67,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:08.294: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:08.294: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 158 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:08:08.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-643440d2-7833-490d-80d9-b17599723443" in namespace "downward-api-2212" to be "Succeeded or Failed"
Jul 10 08:08:08.322: INFO: Pod "downwardapi-volume-643440d2-7833-490d-80d9-b17599723443": Phase="Pending", Reason="", readiness=false. Elapsed: 162.637215ms
Jul 10 08:08:10.485: INFO: Pod "downwardapi-volume-643440d2-7833-490d-80d9-b17599723443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32568476s
Jul 10 08:08:12.649: INFO: Pod "downwardapi-volume-643440d2-7833-490d-80d9-b17599723443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.489593916s
STEP: Saw pod success
Jul 10 08:08:12.649: INFO: Pod "downwardapi-volume-643440d2-7833-490d-80d9-b17599723443" satisfied condition "Succeeded or Failed"
Jul 10 08:08:12.811: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod downwardapi-volume-643440d2-7833-490d-80d9-b17599723443 container client-container: <nil>
STEP: delete the pod
Jul 10 08:08:13.144: INFO: Waiting for pod downwardapi-volume-643440d2-7833-490d-80d9-b17599723443 to disappear
Jul 10 08:08:13.308: INFO: Pod downwardapi-volume-643440d2-7833-490d-80d9-b17599723443 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.467 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:13.676: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 78 lines ...
STEP: creating a claim
Jul 10 08:07:22.615: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathndkjx] to have phase Bound
Jul 10 08:07:22.781: INFO: PersistentVolumeClaim csi-hostpathndkjx found but phase is Pending instead of Bound.
Jul 10 08:07:24.944: INFO: PersistentVolumeClaim csi-hostpathndkjx found and phase=Bound (2.328092086s)
STEP: Expanding non-expandable pvc
Jul 10 08:07:25.267: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 10 08:07:25.593: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:27.918: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:29.924: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:31.920: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:33.934: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:35.923: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:37.926: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:39.918: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:41.918: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:43.918: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:45.919: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:47.920: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:49.918: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:51.918: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:53.924: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:55.920: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:07:56.245: INFO: Error updating pvc csi-hostpathndkjx: persistentvolumeclaims "csi-hostpathndkjx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul 10 08:07:56.245: INFO: Deleting PersistentVolumeClaim "csi-hostpathndkjx"
Jul 10 08:07:56.409: INFO: Waiting up to 5m0s for PersistentVolume pvc-1301ee70-0c85-4224-861a-73f6f122bd27 to get deleted
Jul 10 08:07:56.571: INFO: PersistentVolume pvc-1301ee70-0c85-4224-861a-73f6f122bd27 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-6157
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":18,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:42.295 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete pods when suspended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:111
------------------------------
{"msg":"PASSED [sig-apps] Job should delete pods when suspended","total":-1,"completed":11,"skipped":96,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:08:16.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-1704a60c-373e-4413-b890-a672a5b35c7d
STEP: Creating a pod to test consume secrets
Jul 10 08:08:17.146: INFO: Waiting up to 5m0s for pod "pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20" in namespace "secrets-8462" to be "Succeeded or Failed"
Jul 10 08:08:17.308: INFO: Pod "pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20": Phase="Pending", Reason="", readiness=false. Elapsed: 162.101245ms
Jul 10 08:08:19.471: INFO: Pod "pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324723242s
STEP: Saw pod success
Jul 10 08:08:19.471: INFO: Pod "pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20" satisfied condition "Succeeded or Failed"
Jul 10 08:08:19.633: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20 container secret-volume-test: <nil>
STEP: delete the pod
Jul 10 08:08:19.964: INFO: Waiting for pod pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20 to disappear
Jul 10 08:08:20.126: INFO: Pod pod-secrets-1aa5f2e0-ba32-4104-b368-0351d0177e20 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:20.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8462" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":20,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:20.479: INFO: Only supported for providers [gce gke] (not aws)
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":10,"failed":2,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:21.419: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
Jul 10 08:08:04.112: INFO: PersistentVolumeClaim pvc-g7vlb found but phase is Pending instead of Bound.
Jul 10 08:08:06.273: INFO: PersistentVolumeClaim pvc-g7vlb found and phase=Bound (15.295588164s)
Jul 10 08:08:06.273: INFO: Waiting up to 3m0s for PersistentVolume local-sh644 to have phase Bound
Jul 10 08:08:06.434: INFO: PersistentVolume local-sh644 found and phase=Bound (160.735895ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jwgs
STEP: Creating a pod to test subpath
Jul 10 08:08:06.918: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jwgs" in namespace "provisioning-5850" to be "Succeeded or Failed"
Jul 10 08:08:07.083: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Pending", Reason="", readiness=false. Elapsed: 164.008514ms
Jul 10 08:08:09.244: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32539927s
Jul 10 08:08:11.408: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.489408395s
STEP: Saw pod success
Jul 10 08:08:11.408: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs" satisfied condition "Succeeded or Failed"
Jul 10 08:08:11.569: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-jwgs container test-container-subpath-preprovisionedpv-jwgs: <nil>
STEP: delete the pod
Jul 10 08:08:11.899: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jwgs to disappear
Jul 10 08:08:12.061: INFO: Pod pod-subpath-test-preprovisionedpv-jwgs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jwgs
Jul 10 08:08:12.061: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jwgs" in namespace "provisioning-5850"
STEP: Creating pod pod-subpath-test-preprovisionedpv-jwgs
STEP: Creating a pod to test subpath
Jul 10 08:08:12.384: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jwgs" in namespace "provisioning-5850" to be "Succeeded or Failed"
Jul 10 08:08:12.545: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Pending", Reason="", readiness=false. Elapsed: 160.768364ms
Jul 10 08:08:14.706: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321996812s
Jul 10 08:08:16.867: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483236331s
Jul 10 08:08:19.029: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.645113598s
STEP: Saw pod success
Jul 10 08:08:19.029: INFO: Pod "pod-subpath-test-preprovisionedpv-jwgs" satisfied condition "Succeeded or Failed"
Jul 10 08:08:19.190: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-jwgs container test-container-subpath-preprovisionedpv-jwgs: <nil>
STEP: delete the pod
Jul 10 08:08:19.518: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jwgs to disappear
Jul 10 08:08:19.679: INFO: Pod pod-subpath-test-preprovisionedpv-jwgs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jwgs
Jul 10 08:08:19.679: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jwgs" in namespace "provisioning-5850"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:390
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":28,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 103 lines ...
• [SLOW TEST:16.977 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":29,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:38.907: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 30 lines ...
Jul 10 08:07:08.901: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jul 10 08:07:09.896: INFO: Successfully created a new PD: "aws://ap-northeast-2a/vol-0778791cf25b5597d".
Jul 10 08:07:09.896: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-d9m5
STEP: Creating a pod to test exec-volume-test
Jul 10 08:07:10.054: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-d9m5" in namespace "volume-9115" to be "Succeeded or Failed"
Jul 10 08:07:10.209: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 154.996464ms
Jul 10 08:07:12.365: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311091982s
Jul 10 08:07:14.523: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468931111s
Jul 10 08:07:16.679: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625529151s
Jul 10 08:07:18.836: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781840421s
Jul 10 08:07:20.992: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.937580874s
... skipping 20 lines ...
Jul 10 08:08:06.315: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.261430444s
Jul 10 08:08:08.472: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.41794363s
Jul 10 08:08:10.629: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.575074506s
Jul 10 08:08:12.785: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.730907583s
Jul 10 08:08:14.941: INFO: Pod "exec-volume-test-inlinevolume-d9m5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m4.887107251s
STEP: Saw pod success
Jul 10 08:08:14.941: INFO: Pod "exec-volume-test-inlinevolume-d9m5" satisfied condition "Succeeded or Failed"
Jul 10 08:08:15.097: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod exec-volume-test-inlinevolume-d9m5 container exec-container-inlinevolume-d9m5: <nil>
STEP: delete the pod
Jul 10 08:08:15.413: INFO: Waiting for pod exec-volume-test-inlinevolume-d9m5 to disappear
Jul 10 08:08:15.568: INFO: Pod exec-volume-test-inlinevolume-d9m5 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-d9m5
Jul 10 08:08:15.568: INFO: Deleting pod "exec-volume-test-inlinevolume-d9m5" in namespace "volume-9115"
Jul 10 08:08:15.999: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0778791cf25b5597d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0778791cf25b5597d is currently attached to i-081318d29ecd58b29
	status code: 400, request id: 796024fb-9078-4d05-9022-af608f3bb0ee
Jul 10 08:08:21.762: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0778791cf25b5597d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0778791cf25b5597d is currently attached to i-081318d29ecd58b29
	status code: 400, request id: 87073dab-eaad-4e55-8504-2f42be1bc2c0
Jul 10 08:08:27.531: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0778791cf25b5597d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0778791cf25b5597d is currently attached to i-081318d29ecd58b29
	status code: 400, request id: af6582f7-4544-4f07-8d1f-2c4c0e662c24
Jul 10 08:08:33.291: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0778791cf25b5597d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0778791cf25b5597d is currently attached to i-081318d29ecd58b29
	status code: 400, request id: 17f61443-9b73-4c00-949e-3f9d7c120a1f
Jul 10 08:08:39.106: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-0778791cf25b5597d".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:39.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9115" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:39.438: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 41 lines ...
  Only supported for providers [gce] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:08:17.873: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Jul 10 08:08:33.400: INFO: PersistentVolumeClaim pvc-rsjpn found but phase is Pending instead of Bound.
Jul 10 08:08:35.556: INFO: PersistentVolumeClaim pvc-rsjpn found and phase=Bound (10.940518982s)
Jul 10 08:08:35.556: INFO: Waiting up to 3m0s for PersistentVolume local-wbk7m to have phase Bound
Jul 10 08:08:35.712: INFO: PersistentVolume local-wbk7m found and phase=Bound (155.137936ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rkjh
STEP: Creating a pod to test subpath
Jul 10 08:08:36.182: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rkjh" in namespace "provisioning-2831" to be "Succeeded or Failed"
Jul 10 08:08:36.337: INFO: Pod "pod-subpath-test-preprovisionedpv-rkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 155.379506ms
Jul 10 08:08:38.494: INFO: Pod "pod-subpath-test-preprovisionedpv-rkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312774901s
Jul 10 08:08:40.652: INFO: Pod "pod-subpath-test-preprovisionedpv-rkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470741436s
Jul 10 08:08:42.810: INFO: Pod "pod-subpath-test-preprovisionedpv-rkjh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.6280888s
STEP: Saw pod success
Jul 10 08:08:42.810: INFO: Pod "pod-subpath-test-preprovisionedpv-rkjh" satisfied condition "Succeeded or Failed"
Jul 10 08:08:42.965: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-rkjh container test-container-subpath-preprovisionedpv-rkjh: <nil>
STEP: delete the pod
Jul 10 08:08:43.283: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rkjh to disappear
Jul 10 08:08:43.439: INFO: Pod pod-subpath-test-preprovisionedpv-rkjh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rkjh
Jul 10 08:08:43.439: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rkjh" in namespace "provisioning-2831"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:45.570: INFO: Only supported for providers [openstack] (not aws)
... skipping 41 lines ...
Jul 10 08:08:33.529: INFO: PersistentVolumeClaim pvc-vv6gf found but phase is Pending instead of Bound.
Jul 10 08:08:35.693: INFO: PersistentVolumeClaim pvc-vv6gf found and phase=Bound (13.147605689s)
Jul 10 08:08:35.693: INFO: Waiting up to 3m0s for PersistentVolume local-2cx8t to have phase Bound
Jul 10 08:08:35.855: INFO: PersistentVolume local-2cx8t found and phase=Bound (162.599034ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bqfc
STEP: Creating a pod to test subpath
Jul 10 08:08:36.345: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bqfc" in namespace "provisioning-511" to be "Succeeded or Failed"
Jul 10 08:08:36.511: INFO: Pod "pod-subpath-test-preprovisionedpv-bqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 165.638044ms
Jul 10 08:08:38.675: INFO: Pod "pod-subpath-test-preprovisionedpv-bqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329953068s
Jul 10 08:08:40.839: INFO: Pod "pod-subpath-test-preprovisionedpv-bqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493379973s
Jul 10 08:08:43.002: INFO: Pod "pod-subpath-test-preprovisionedpv-bqfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.656890766s
STEP: Saw pod success
Jul 10 08:08:43.002: INFO: Pod "pod-subpath-test-preprovisionedpv-bqfc" satisfied condition "Succeeded or Failed"
Jul 10 08:08:43.165: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-bqfc container test-container-subpath-preprovisionedpv-bqfc: <nil>
STEP: delete the pod
Jul 10 08:08:43.499: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bqfc to disappear
Jul 10 08:08:43.662: INFO: Pod pod-subpath-test-preprovisionedpv-bqfc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bqfc
Jul 10 08:08:43.662: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bqfc" in namespace "provisioning-511"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:5.949 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1582
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":6,"skipped":49,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:46.156: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 82 lines ...
• [SLOW TEST:7.275 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":7,"skipped":92,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:46.786: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 93 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Jul 10 08:08:47.809: INFO: Waiting up to 5m0s for pod "busybox-user-0-24b4ad57-3bcd-4b9b-b782-7714bc8470d1" in namespace "security-context-test-3809" to be "Succeeded or Failed"
Jul 10 08:08:47.965: INFO: Pod "busybox-user-0-24b4ad57-3bcd-4b9b-b782-7714bc8470d1": Phase="Pending", Reason="", readiness=false. Elapsed: 155.497016ms
Jul 10 08:08:50.121: INFO: Pod "busybox-user-0-24b4ad57-3bcd-4b9b-b782-7714bc8470d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.311382564s
Jul 10 08:08:50.121: INFO: Pod "busybox-user-0-24b4ad57-3bcd-4b9b-b782-7714bc8470d1" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3809" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":111,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:50.464: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 48 lines ...
• [SLOW TEST:15.889 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":7,"skipped":86,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:51.313: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 90 lines ...
Jul 10 08:07:38.360: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6088 to register on node ip-172-20-35-182.ap-northeast-2.compute.internal
STEP: Creating pod
Jul 10 08:07:48.657: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 10 08:07:48.818: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-dq5s9] to have phase Bound
Jul 10 08:07:48.977: INFO: PersistentVolumeClaim pvc-dq5s9 found and phase=Bound (158.608644ms)
STEP: checking for CSIInlineVolumes feature
Jul 10 08:08:04.095: INFO: Error getting logs for pod inline-volume-vdqlf: the server rejected our request for an unknown reason (get pods inline-volume-vdqlf)
Jul 10 08:08:04.414: INFO: Deleting pod "inline-volume-vdqlf" in namespace "csi-mock-volumes-6088"
Jul 10 08:08:04.574: INFO: Wait up to 5m0s for pod "inline-volume-vdqlf" to be fully deleted
STEP: Deleting the previously created pod
Jul 10 08:08:14.893: INFO: Deleting pod "pvc-volume-tester-fxnjx" in namespace "csi-mock-volumes-6088"
Jul 10 08:08:15.054: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fxnjx" to be fully deleted
STEP: Checking CSI driver logs
Jul 10 08:08:27.538: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: ca13cfaa-c0e8-4eac-b938-e2883d932714
Jul 10 08:08:27.538: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jul 10 08:08:27.538: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jul 10 08:08:27.538: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-fxnjx
Jul 10 08:08:27.538: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-6088
Jul 10 08:08:27.538: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ca13cfaa-c0e8-4eac-b938-e2883d932714/volumes/kubernetes.io~csi/pvc-5ee1ec01-8d89-4fc3-956f-935ebebcb679/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-fxnjx
Jul 10 08:08:27.538: INFO: Deleting pod "pvc-volume-tester-fxnjx" in namespace "csi-mock-volumes-6088"
STEP: Deleting claim pvc-dq5s9
Jul 10 08:08:28.015: INFO: Waiting up to 2m0s for PersistentVolume pvc-5ee1ec01-8d89-4fc3-956f-935ebebcb679 to get deleted
Jul 10 08:08:28.174: INFO: PersistentVolume pvc-5ee1ec01-8d89-4fc3-956f-935ebebcb679 was removed
STEP: Deleting storageclass csi-mock-volumes-6088-scqlmns
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":10,"skipped":125,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:52.403: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-e0dc3163-e996-4d35-a357-93cf31a8529f
STEP: Creating secret with name secret-projected-all-test-volume-4e377484-47e8-41bf-bbe1-d5431265ad22
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 10 08:08:52.634: INFO: Waiting up to 5m0s for pod "projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80" in namespace "projected-9954" to be "Succeeded or Failed"
Jul 10 08:08:52.796: INFO: Pod "projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80": Phase="Pending", Reason="", readiness=false. Elapsed: 162.561475ms
Jul 10 08:08:54.956: INFO: Pod "projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321991365s
STEP: Saw pod success
Jul 10 08:08:54.956: INFO: Pod "projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80" satisfied condition "Succeeded or Failed"
Jul 10 08:08:55.115: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80 container projected-all-volume-test: <nil>
STEP: delete the pod
Jul 10 08:08:55.440: INFO: Waiting for pod projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80 to disappear
Jul 10 08:08:55.601: INFO: Pod projected-volume-4683242d-7b54-42fc-8846-3c6f2d8c6d80 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:08:50.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:08:55.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1841" for this suite.


• [SLOW TEST:5.559 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":9,"skipped":119,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:56.104: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":16,"failed":2,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:08:57.516: INFO: Only supported for providers [openstack] (not aws)
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":62,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:40.118 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not create pods when created in suspend state
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":3,"skipped":32,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:00.700: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":66,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:01.927: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62 lines ...
• [SLOW TEST:6.499 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":121,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:02.634: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 10 08:08:53.550: INFO: The status of Pod server-envvars-a5543aa2-c445-4083-83c7-f4856866990a is Pending, waiting for it to be Running (with Ready = true)
Jul 10 08:08:55.711: INFO: The status of Pod server-envvars-a5543aa2-c445-4083-83c7-f4856866990a is Pending, waiting for it to be Running (with Ready = true)
Jul 10 08:08:57.711: INFO: The status of Pod server-envvars-a5543aa2-c445-4083-83c7-f4856866990a is Running (Ready = true)
Jul 10 08:08:58.193: INFO: Waiting up to 5m0s for pod "client-envvars-5e885c81-7e71-430d-876b-95269f371e7c" in namespace "pods-1044" to be "Succeeded or Failed"
Jul 10 08:08:58.352: INFO: Pod "client-envvars-5e885c81-7e71-430d-876b-95269f371e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 158.517827ms
Jul 10 08:09:00.512: INFO: Pod "client-envvars-5e885c81-7e71-430d-876b-95269f371e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318676476s
Jul 10 08:09:02.672: INFO: Pod "client-envvars-5e885c81-7e71-430d-876b-95269f371e7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.479047108s
STEP: Saw pod success
Jul 10 08:09:02.672: INFO: Pod "client-envvars-5e885c81-7e71-430d-876b-95269f371e7c" satisfied condition "Succeeded or Failed"
Jul 10 08:09:02.832: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod client-envvars-5e885c81-7e71-430d-876b-95269f371e7c container env3cont: <nil>
STEP: delete the pod
Jul 10 08:09:03.156: INFO: Waiting for pod client-envvars-5e885c81-7e71-430d-876b-95269f371e7c to disappear
Jul 10 08:09:03.315: INFO: Pod client-envvars-5e885c81-7e71-430d-876b-95269f371e7c no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.201 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":132,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:03.676: INFO: Only supported for providers [azure] (not aws)
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:08.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7836" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":12,"skipped":144,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:08.701: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":97,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:09:07.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul 10 08:09:08.157: INFO: Waiting up to 5m0s for pod "security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa" in namespace "security-context-5715" to be "Succeeded or Failed"
Jul 10 08:09:08.318: INFO: Pod "security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa": Phase="Pending", Reason="", readiness=false. Elapsed: 160.903406ms
Jul 10 08:09:10.480: INFO: Pod "security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322779491s
STEP: Saw pod success
Jul 10 08:09:10.480: INFO: Pod "security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa" satisfied condition "Succeeded or Failed"
Jul 10 08:09:10.641: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa container test-container: <nil>
STEP: delete the pod
Jul 10 08:09:11.013: INFO: Waiting for pod security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa to disappear
Jul 10 08:09:11.174: INFO: Pod security-context-2c7f052a-6fcd-45a0-a797-eb6ca0829baa no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:11.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5715" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":9,"skipped":97,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:11.532: INFO: Only supported for providers [gce gke] (not aws)
... skipping 114 lines ...
• [SLOW TEST:26.641 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:12.239: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:13.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3929" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":13,"skipped":160,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:09:13.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6" in namespace "downward-api-6433" to be "Succeeded or Failed"
Jul 10 08:09:13.369: INFO: Pod "downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6": Phase="Pending", Reason="", readiness=false. Elapsed: 155.719256ms
Jul 10 08:09:15.525: INFO: Pod "downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.311935463s
STEP: Saw pod success
Jul 10 08:09:15.525: INFO: Pod "downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6" satisfied condition "Succeeded or Failed"
Jul 10 08:09:15.681: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6 container client-container: <nil>
STEP: delete the pod
Jul 10 08:09:16.009: INFO: Waiting for pod downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6 to disappear
Jul 10 08:09:16.165: INFO: Pod downwardapi-volume-be2b4417-0872-4769-896d-32a5f19741f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:16.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6433" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:16.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6784" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":14,"skipped":164,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:17.282: INFO: Only supported for providers [gce gke] (not aws)
... skipping 61 lines ...
• [SLOW TEST:13.514 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:57
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":10,"skipped":111,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:09:25.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
Jul 10 08:09:18.743: INFO: PersistentVolumeClaim pvc-dq2vl found but phase is Pending instead of Bound.
Jul 10 08:09:20.907: INFO: PersistentVolumeClaim pvc-dq2vl found and phase=Bound (15.312351418s)
Jul 10 08:09:20.907: INFO: Waiting up to 3m0s for PersistentVolume local-jqjfn to have phase Bound
Jul 10 08:09:21.069: INFO: PersistentVolume local-jqjfn found and phase=Bound (162.041147ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5qcn
STEP: Creating a pod to test subpath
Jul 10 08:09:21.556: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5qcn" in namespace "provisioning-4676" to be "Succeeded or Failed"
Jul 10 08:09:21.719: INFO: Pod "pod-subpath-test-preprovisionedpv-5qcn": Phase="Pending", Reason="", readiness=false. Elapsed: 162.074667ms
Jul 10 08:09:23.882: INFO: Pod "pod-subpath-test-preprovisionedpv-5qcn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.325129344s
STEP: Saw pod success
Jul 10 08:09:23.882: INFO: Pod "pod-subpath-test-preprovisionedpv-5qcn" satisfied condition "Succeeded or Failed"
Jul 10 08:09:24.044: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-5qcn container test-container-volume-preprovisionedpv-5qcn: <nil>
STEP: delete the pod
Jul 10 08:09:24.384: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5qcn to disappear
Jul 10 08:09:24.547: INFO: Pod pod-subpath-test-preprovisionedpv-5qcn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5qcn
Jul 10 08:09:24.547: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5qcn" in namespace "provisioning-4676"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":38,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:26.762: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:35.689: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:04:27.626: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Jul 10 08:04:28.432: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6799cvj6x
STEP: creating a claim
Jul 10 08:04:28.595: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-pn4m
STEP: Creating a pod to test subpath
Jul 10 08:04:29.085: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-pn4m" in namespace "provisioning-6799" to be "Succeeded or Failed"
Jul 10 08:04:29.259: INFO: Pod "pod-subpath-test-dynamicpv-pn4m": Phase="Pending", Reason="", readiness=false. Elapsed: 173.947667ms
Jul 10 08:04:31.421: INFO: Pod "pod-subpath-test-dynamicpv-pn4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336431799s
Jul 10 08:04:33.582: INFO: Pod "pod-subpath-test-dynamicpv-pn4m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497701582s
Jul 10 08:04:35.746: INFO: Pod "pod-subpath-test-dynamicpv-pn4m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.661403316s
Jul 10 08:04:37.910: INFO: Pod "pod-subpath-test-dynamicpv-pn4m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824809631s
Jul 10 08:04:40.075: INFO: Pod "pod-subpath-test-dynamicpv-pn4m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.989885667s
... skipping 133 lines ...
Jul 10 08:09:30.016: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-pn4m" container "init-volume-dynamicpv-pn4m": 
Jul 10 08:09:30.177: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-pn4m" container "test-init-subpath-dynamicpv-pn4m": 
Jul 10 08:09:30.338: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-pn4m" container "test-container-subpath-dynamicpv-pn4m": 
STEP: delete the pod
Jul 10 08:09:30.503: INFO: Waiting for pod pod-subpath-test-dynamicpv-pn4m to disappear
Jul 10 08:09:30.664: INFO: Pod pod-subpath-test-dynamicpv-pn4m no longer exists
Jul 10 08:09:30.665: FAIL: Unexpected error:
    <*errors.errorString | 0xc002de4c80>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-pn4m\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-pn4m\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-pn4m" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-pn4m" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00244a160, 0x7058f38, 0x7, 0xc002b70800, 0x0, 0xc0013a3080, 0x1, 0x1, 0x72c4fd0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-6799".
STEP: Found 6 events.
Jul 10 08:09:31.474: INFO: At 2021-07-10 08:04:28 +0000 UTC - event for aws49hft: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul 10 08:09:31.474: INFO: At 2021-07-10 08:04:29 +0000 UTC - event for aws49hft: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-t4r8x_2d6d2a00-af92-42c3-bf90-3c4cb558e322 } Provisioning: External provisioner is provisioning volume for claim "provisioning-6799/aws49hft"
Jul 10 08:09:31.474: INFO: At 2021-07-10 08:04:29 +0000 UTC - event for aws49hft: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul 10 08:09:31.474: INFO: At 2021-07-10 08:04:39 +0000 UTC - event for aws49hft: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-t4r8x_2d6d2a00-af92-42c3-bf90-3c4cb558e322 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-6799cvj6x": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul 10 08:09:31.474: INFO: At 2021-07-10 08:04:50 +0000 UTC - event for aws49hft: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-t4r8x_2d6d2a00-af92-42c3-bf90-3c4cb558e322 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-6799cvj6x": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 10 08:09:31.474: INFO: At 2021-07-10 08:09:31 +0000 UTC - event for pod-subpath-test-dynamicpv-pn4m: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "pod-subpath-test-dynamicpv-pn4m" not found
Jul 10 08:09:31.634: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul 10 08:09:31.634: INFO: 
Jul 10 08:09:31.796: INFO: 
Logging node info for node ip-172-20-35-182.ap-northeast-2.compute.internal
Jul 10 08:09:31.957: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-35-182.ap-northeast-2.compute.internal    1a6c9be0-f2a0-437a-b5ad-99701c252cfb 11350 0 2021-07-10 08:00:56 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:nodes-ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-35-182.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-35-182.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-4486":"csi-mock-csi-mock-volumes-4486","ebs.csi.aws.com":"i-07e3a6916f931b901"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-10 08:00:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-10 08:00:56 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-10 08:00:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-10 08:01:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.5.0/24\"":{}}}} } {kubelet Update v1 2021-07-10 08:08:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-07e3a6916f931b901,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-10 08:08:26 +0000 UTC,LastTransitionTime:2021-07-10 08:00:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-10 08:08:26 +0000 UTC,LastTransitionTime:2021-07-10 08:00:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-10 08:08:26 +0000 UTC,LastTransitionTime:2021-07-10 08:00:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-10 08:08:26 +0000 UTC,LastTransitionTime:2021-07-10 08:01:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.35.182,},NodeAddress{Type:ExternalIP,Address:3.36.67.24,},NodeAddress{Type:InternalDNS,Address:ip-172-20-35-182.ap-northeast-2.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-35-182.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-36-67-24.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec24797d4aec87f81c975a28ffae5bb1,SystemUUID:ec24797d-4aec-87f8-1c97-5a28ffae5bb1,BootID:56033625-3348-442f-9130-e62e8679b3df,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.1,KubeProxyVersion:v1.22.0-beta.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.1],SizeBytes:105483977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 184 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:390

      Jul 10 08:09:30.665: Unexpected error:
          <*errors.errorString | 0xc002de4c80>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-pn4m\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-pn4m\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-pn4m" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-pn4m" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":9,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:37.651: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
Jul 10 08:04:35.647: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-100579qdd
STEP: creating a claim
Jul 10 08:04:35.811: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-l7z5
STEP: Creating a pod to test exec-volume-test
Jul 10 08:04:36.299: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-l7z5" in namespace "volume-1005" to be "Succeeded or Failed"
Jul 10 08:04:36.475: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 176.149707ms
Jul 10 08:04:38.639: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339864812s
Jul 10 08:04:40.802: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.502880429s
Jul 10 08:04:42.964: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.665429326s
Jul 10 08:04:45.127: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.828517455s
Jul 10 08:04:47.291: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.991937225s
... skipping 89 lines ...
Jul 10 08:08:01.925: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m25.625946313s
Jul 10 08:08:04.087: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m27.787958788s
Jul 10 08:08:06.251: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m29.952536783s
Jul 10 08:08:08.413: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.114660029s
Jul 10 08:08:10.577: INFO: Pod "exec-volume-test-dynamicpv-l7z5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3m34.277794874s
STEP: Saw pod success
Jul 10 08:08:10.577: INFO: Pod "exec-volume-test-dynamicpv-l7z5" satisfied condition "Succeeded or Failed"
Jul 10 08:08:10.738: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod exec-volume-test-dynamicpv-l7z5 container exec-container-dynamicpv-l7z5: <nil>
STEP: delete the pod
Jul 10 08:08:11.070: INFO: Waiting for pod exec-volume-test-dynamicpv-l7z5 to disappear
Jul 10 08:08:11.231: INFO: Pod exec-volume-test-dynamicpv-l7z5 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-l7z5
Jul 10 08:08:11.231: INFO: Deleting pod "exec-volume-test-dynamicpv-l7z5" in namespace "volume-1005"
... skipping 177 lines ...
Jul 10 08:09:33.025: INFO: PersistentVolumeClaim pvc-hlbvq found but phase is Pending instead of Bound.
Jul 10 08:09:35.195: INFO: PersistentVolumeClaim pvc-hlbvq found and phase=Bound (2.331872488s)
Jul 10 08:09:35.195: INFO: Waiting up to 3m0s for PersistentVolume local-vtpfq to have phase Bound
Jul 10 08:09:35.356: INFO: PersistentVolume local-vtpfq found and phase=Bound (161.525816ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dzc6
STEP: Creating a pod to test subpath
Jul 10 08:09:35.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dzc6" in namespace "provisioning-902" to be "Succeeded or Failed"
Jul 10 08:09:36.006: INFO: Pod "pod-subpath-test-preprovisionedpv-dzc6": Phase="Pending", Reason="", readiness=false. Elapsed: 162.397687ms
Jul 10 08:09:38.171: INFO: Pod "pod-subpath-test-preprovisionedpv-dzc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326983468s
Jul 10 08:09:40.333: INFO: Pod "pod-subpath-test-preprovisionedpv-dzc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.489334989s
STEP: Saw pod success
Jul 10 08:09:40.333: INFO: Pod "pod-subpath-test-preprovisionedpv-dzc6" satisfied condition "Succeeded or Failed"
Jul 10 08:09:40.495: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-dzc6 container test-container-subpath-preprovisionedpv-dzc6: <nil>
STEP: delete the pod
Jul 10 08:09:40.825: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dzc6 to disappear
Jul 10 08:09:40.987: INFO: Pod pod-subpath-test-preprovisionedpv-dzc6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dzc6
Jul 10 08:09:40.987: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dzc6" in namespace "provisioning-902"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":45,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:45.526: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 108 lines ...
Jul 10 08:09:34.697: INFO: PersistentVolumeClaim pvc-7dq44 found but phase is Pending instead of Bound.
Jul 10 08:09:36.855: INFO: PersistentVolumeClaim pvc-7dq44 found and phase=Bound (13.095796356s)
Jul 10 08:09:36.855: INFO: Waiting up to 3m0s for PersistentVolume local-d8mcz to have phase Bound
Jul 10 08:09:37.010: INFO: PersistentVolume local-d8mcz found and phase=Bound (155.375357ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qfp6
STEP: Creating a pod to test subpath
Jul 10 08:09:37.479: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qfp6" in namespace "provisioning-9283" to be "Succeeded or Failed"
Jul 10 08:09:37.635: INFO: Pod "pod-subpath-test-preprovisionedpv-qfp6": Phase="Pending", Reason="", readiness=false. Elapsed: 156.533107ms
Jul 10 08:09:39.791: INFO: Pod "pod-subpath-test-preprovisionedpv-qfp6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312619798s
Jul 10 08:09:41.949: INFO: Pod "pod-subpath-test-preprovisionedpv-qfp6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.470031183s
STEP: Saw pod success
Jul 10 08:09:41.949: INFO: Pod "pod-subpath-test-preprovisionedpv-qfp6" satisfied condition "Succeeded or Failed"
Jul 10 08:09:42.104: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-qfp6 container test-container-subpath-preprovisionedpv-qfp6: <nil>
STEP: delete the pod
Jul 10 08:09:42.426: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qfp6 to disappear
Jul 10 08:09:42.582: INFO: Pod pod-subpath-test-preprovisionedpv-qfp6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qfp6
Jul 10 08:09:42.582: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qfp6" in namespace "provisioning-9283"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:48.189: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-4bc234eb-c4fa-4905-b423-a00a060a12a2
STEP: Creating a pod to test consume secrets
Jul 10 08:09:45.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa" in namespace "projected-2600" to be "Succeeded or Failed"
Jul 10 08:09:45.206: INFO: Pod "pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa": Phase="Pending", Reason="", readiness=false. Elapsed: 158.937428ms
Jul 10 08:09:47.367: INFO: Pod "pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31959967s
STEP: Saw pod success
Jul 10 08:09:47.367: INFO: Pod "pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa" satisfied condition "Succeeded or Failed"
Jul 10 08:09:47.526: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 10 08:09:47.868: INFO: Waiting for pod pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa to disappear
Jul 10 08:09:48.026: INFO: Pod pod-projected-secrets-25d7df14-14b6-4aa7-ac20-dd0ae6d456fa no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:48.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2600" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":193,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:48.357: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":12,"skipped":98,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:09:49.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3" in namespace "projected-2264" to be "Succeeded or Failed"
Jul 10 08:09:49.311: INFO: Pod "downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3": Phase="Pending", Reason="", readiness=false. Elapsed: 160.117447ms
Jul 10 08:09:51.467: INFO: Pod "downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316083742s
STEP: Saw pod success
Jul 10 08:09:51.467: INFO: Pod "downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3" satisfied condition "Succeeded or Failed"
Jul 10 08:09:51.623: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3 container client-container: <nil>
STEP: delete the pod
Jul 10 08:09:51.941: INFO: Waiting for pod downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3 to disappear
Jul 10 08:09:52.096: INFO: Pod downwardapi-volume-62586141-9901-4eb6-b9a4-6479387a52b3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:52.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2264" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:52.458: INFO: Only supported for providers [vsphere] (not aws)
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:52.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-4278" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":13,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:53.216: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:56.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2689" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:56.675: INFO: Only supported for providers [azure] (not aws)
... skipping 86 lines ...
Jul 10 08:08:06.639: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3497 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.65.16.53:80 2>&1 || true; echo; done'
Jul 10 08:09:44.389: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 
1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ 
echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.16.53:80\n+ true\n+ echo\n"
Jul 10 08:09:44.390: INFO: stdout: "service-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: 
download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-q2jqj\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\n"
Jul 10 08:09:44.390: INFO: Unable to reach the following endpoints of service 100.65.16.53: map[service-headless-toggled-dp6x6:{} service-headless-toggled-g5pqh:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3497
STEP: Deleting pod verify-service-up-exec-pod-wv5lw in namespace services-3497
Jul 10 08:09:50.186: FAIL: Unexpected error:
    <*errors.errorString | 0xc001c08090>: {
        s: "service verification failed for: 100.65.16.53\nexpected [service-headless-toggled-dp6x6 service-headless-toggled-g5pqh service-headless-toggled-q2jqj]\nreceived [service-headless-toggled-q2jqj wget: download timed out]",
    }
    service verification failed for: 100.65.16.53
    expected [service-headless-toggled-dp6x6 service-headless-toggled-g5pqh service-headless-toggled-q2jqj]
    received [service-headless-toggled-q2jqj wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.29()
... skipping 262 lines ...
• Failure [354.287 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1937

  Jul 10 08:09:50.186: Unexpected error:
      <*errors.errorString | 0xc001c08090>: {
          s: "service verification failed for: 100.65.16.53\nexpected [service-headless-toggled-dp6x6 service-headless-toggled-g5pqh service-headless-toggled-q2jqj]\nreceived [service-headless-toggled-q2jqj wget: download timed out]",
      }
      service verification failed for: 100.65.16.53
      expected [service-headless-toggled-dp6x6 service-headless-toggled-g5pqh service-headless-toggled-q2jqj]
      received [service-headless-toggled-q2jqj wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1962
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":0,"skipped":2,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:56.882: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Creating a kubernetes client
Jul 10 08:04:55.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Jul 10 08:04:56.123: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 10 08:09:56.922: INFO: The test missed event about failed provisioning, but checked that no volume was provisioned for 5m0s
Jul 10 08:09:56.922: INFO: deleting claim "volume-provisioning-1097"/"pvc-pmrwg"
Jul 10 08:09:57.081: INFO: deleting storage class volume-provisioning-1097-invalid-aws9xcfv
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:09:57.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-1097" for this suite.


• [SLOW TEST:302.560 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:737
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:09:57.574: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":12,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:09:36.111: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jul 10 08:09:48.022: INFO: PersistentVolumeClaim pvc-fz9rb found but phase is Pending instead of Bound.
Jul 10 08:09:50.185: INFO: PersistentVolumeClaim pvc-fz9rb found and phase=Bound (6.645834215s)
Jul 10 08:09:50.185: INFO: Waiting up to 3m0s for PersistentVolume local-qxcqr to have phase Bound
Jul 10 08:09:50.346: INFO: PersistentVolume local-qxcqr found and phase=Bound (160.805647ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4mtg
STEP: Creating a pod to test subpath
Jul 10 08:09:50.827: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4mtg" in namespace "provisioning-2658" to be "Succeeded or Failed"
Jul 10 08:09:50.987: INFO: Pod "pod-subpath-test-preprovisionedpv-4mtg": Phase="Pending", Reason="", readiness=false. Elapsed: 159.944067ms
Jul 10 08:09:53.149: INFO: Pod "pod-subpath-test-preprovisionedpv-4mtg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321603932s
Jul 10 08:09:55.310: INFO: Pod "pod-subpath-test-preprovisionedpv-4mtg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482303398s
Jul 10 08:09:57.472: INFO: Pod "pod-subpath-test-preprovisionedpv-4mtg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.644109534s
STEP: Saw pod success
Jul 10 08:09:57.472: INFO: Pod "pod-subpath-test-preprovisionedpv-4mtg" satisfied condition "Succeeded or Failed"
Jul 10 08:09:57.631: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4mtg container test-container-volume-preprovisionedpv-4mtg: <nil>
STEP: delete the pod
Jul 10 08:09:57.960: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4mtg to disappear
Jul 10 08:09:58.120: INFO: Pod pod-subpath-test-preprovisionedpv-4mtg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4mtg
Jul 10 08:09:58.120: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4mtg" in namespace "provisioning-2658"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":13,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:03.581: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 178 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":195,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:09.095: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
• [SLOW TEST:17.089 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:52
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":11,"skipped":131,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:10:05.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:5.002 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":12,"skipped":131,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:10.535: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:09:40.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":16,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:16.790: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:238

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:05:09.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 154 lines ...
Jul 10 08:10:05.265: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Jul 10 08:10:07.266: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Jul 10 08:10:09.269: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Jul 10 08:10:11.265: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Jul 10 08:10:13.265: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Jul 10 08:10:13.439: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Jul 10 08:10:13.439: FAIL: Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 22 lines ...
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:10 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} Started: Started container agnhost-container
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:12 +0000 UTC - event for pod-with-poststart-http-hook: {default-scheduler } Scheduled: Successfully assigned container-lifecycle-hook-4543/pod-with-poststart-http-hook to ip-172-20-49-206.ap-northeast-2.compute.internal
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:13 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Pulling: Pulling image "k8s.gcr.io/pause:3.5"
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:15 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.5" in 1.907595814s
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:15 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Created: Created container pod-with-poststart-http-hook
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:15 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Started: Started container pod-with-poststart-http-hook
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:45 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} FailedPostStartHook: HTTP lifecycle hook (/echo?msg=poststart) for Container "pod-with-poststart-http-hook" in Pod "pod-with-poststart-http-hook_container-lifecycle-hook-4543(c66b6ffd-be4b-4b97-88a8-fda6cab64f91)" failed - error: Get "http://100.96.5.32:8080//echo?msg=poststart": dial tcp 100.96.5.32:8080: i/o timeout, message: ""
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:45 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Killing: FailedPostStartHook
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:05:46 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.5" already present on machine
Jul 10 08:10:13.606: INFO: At 2021-07-10 08:06:17 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} BackOff: Back-off restarting failed container
Jul 10 08:10:13.770: INFO: POD                           NODE                                              PHASE    GRACE  CONDITIONS
Jul 10 08:10:13.771: INFO: pod-handle-http-request       ip-172-20-35-182.ap-northeast-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:10 +0000 UTC  }]
Jul 10 08:10:13.771: INFO: pod-with-poststart-http-hook  ip-172-20-49-206.ap-northeast-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:12 +0000 UTC ContainersNotReady containers with unready status: [pod-with-poststart-http-hook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:12 +0000 UTC ContainersNotReady containers with unready status: [pod-with-poststart-http-hook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:05:12 +0000 UTC  }]
Jul 10 08:10:13.771: INFO: 
Jul 10 08:10:13.937: INFO: 
Logging node info for node ip-172-20-35-182.ap-northeast-2.compute.internal
... skipping 207 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul 10 08:10:13.439: Unexpected error:
        <*errors.errorString | 0xc000248250>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:19.748: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
• [SLOW TEST:7.589 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":5,"skipped":19,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:24.472: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-79a945be-40fb-4fcb-a5dc-aa1183d23dbc
STEP: Creating a pod to test consume configMaps
Jul 10 08:10:15.547: INFO: Waiting up to 5m0s for pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710" in namespace "configmap-631" to be "Succeeded or Failed"
Jul 10 08:10:15.709: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710": Phase="Pending", Reason="", readiness=false. Elapsed: 162.884698ms
Jul 10 08:10:17.871: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324620738s
Jul 10 08:10:20.033: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48678974s
Jul 10 08:10:22.198: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651251533s
Jul 10 08:10:24.359: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812906516s
Jul 10 08:10:26.522: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.97537312s
STEP: Saw pod success
Jul 10 08:10:26.522: INFO: Pod "pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710" satisfied condition "Succeeded or Failed"
Jul 10 08:10:26.701: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710 container agnhost-container: <nil>
STEP: delete the pod
Jul 10 08:10:27.038: INFO: Waiting for pod pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710 to disappear
Jul 10 08:10:27.198: INFO: Pod pod-configmaps-d46b7172-5649-415d-83d9-4c2f22c0e710 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.106 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:27.549: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":14,"skipped":103,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:10:10.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":15,"skipped":103,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:10:27.702: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68395 lines ...
• Failure [789.066 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:33:12.246: Unexpected error:
      <*errors.errorString | 0xc0002b8240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":12,"skipped":85,"failed":4,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:18.802: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 88 lines ...
• [SLOW TEST:5.113 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":30,"skipped":184,"failed":4,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "services-8461" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":13,"skipped":93,"failed":4,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:20.137: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
• [SLOW TEST:23.582 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":26,"skipped":194,"failed":4,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:20.677: INFO: Only supported for providers [gce gke] (not aws)
... skipping 265 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":43,"skipped":399,"failed":3,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:33:19.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-c11f3d5a-0a30-4498-9ad3-93a50ff9f324
STEP: Creating a pod to test consume secrets
Jul 10 08:33:20.285: INFO: Waiting up to 5m0s for pod "pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0" in namespace "secrets-1461" to be "Succeeded or Failed"
Jul 10 08:33:20.440: INFO: Pod "pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0": Phase="Pending", Reason="", readiness=false. Elapsed: 154.635745ms
Jul 10 08:33:22.595: INFO: Pod "pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.309986992s
STEP: Saw pod success
Jul 10 08:33:22.595: INFO: Pod "pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0" satisfied condition "Succeeded or Failed"
Jul 10 08:33:22.751: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0 container secret-volume-test: <nil>
STEP: delete the pod
Jul 10 08:33:23.071: INFO: Waiting for pod pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0 to disappear
Jul 10 08:33:23.225: INFO: Pod pod-secrets-a1cb8f31-d7bc-4e39-9f4d-2a754245dfb0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:33:23.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1461" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":185,"failed":4,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:23.552: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
Jul 10 08:33:14.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul 10 08:33:15.019: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:33:15.346: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-867" in namespace "provisioning-867" to be "Succeeded or Failed"
Jul 10 08:33:15.508: INFO: Pod "hostpath-symlink-prep-provisioning-867": Phase="Pending", Reason="", readiness=false. Elapsed: 161.755736ms
Jul 10 08:33:17.670: INFO: Pod "hostpath-symlink-prep-provisioning-867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324052262s
STEP: Saw pod success
Jul 10 08:33:17.671: INFO: Pod "hostpath-symlink-prep-provisioning-867" satisfied condition "Succeeded or Failed"
Jul 10 08:33:17.671: INFO: Deleting pod "hostpath-symlink-prep-provisioning-867" in namespace "provisioning-867"
Jul 10 08:33:17.839: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-867" to be fully deleted
Jul 10 08:33:18.001: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-wwjf
STEP: Creating a pod to test subpath
Jul 10 08:33:18.175: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-wwjf" in namespace "provisioning-867" to be "Succeeded or Failed"
Jul 10 08:33:18.336: INFO: Pod "pod-subpath-test-inlinevolume-wwjf": Phase="Pending", Reason="", readiness=false. Elapsed: 161.106417ms
Jul 10 08:33:20.498: INFO: Pod "pod-subpath-test-inlinevolume-wwjf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322635614s
STEP: Saw pod success
Jul 10 08:33:20.498: INFO: Pod "pod-subpath-test-inlinevolume-wwjf" satisfied condition "Succeeded or Failed"
Jul 10 08:33:20.660: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-wwjf container test-container-subpath-inlinevolume-wwjf: <nil>
STEP: delete the pod
Jul 10 08:33:20.992: INFO: Waiting for pod pod-subpath-test-inlinevolume-wwjf to disappear
Jul 10 08:33:21.155: INFO: Pod pod-subpath-test-inlinevolume-wwjf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-wwjf
Jul 10 08:33:21.155: INFO: Deleting pod "pod-subpath-test-inlinevolume-wwjf" in namespace "provisioning-867"
STEP: Deleting pod
Jul 10 08:33:21.316: INFO: Deleting pod "pod-subpath-test-inlinevolume-wwjf" in namespace "provisioning-867"
Jul 10 08:33:21.639: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-867" in namespace "provisioning-867" to be "Succeeded or Failed"
Jul 10 08:33:21.801: INFO: Pod "hostpath-symlink-prep-provisioning-867": Phase="Pending", Reason="", readiness=false. Elapsed: 161.970545ms
Jul 10 08:33:23.963: INFO: Pod "hostpath-symlink-prep-provisioning-867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323464002s
STEP: Saw pod success
Jul 10 08:33:23.963: INFO: Pod "hostpath-symlink-prep-provisioning-867" satisfied condition "Succeeded or Failed"
Jul 10 08:33:23.963: INFO: Deleting pod "hostpath-symlink-prep-provisioning-867" in namespace "provisioning-867"
Jul 10 08:33:24.129: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-867" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:33:24.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-867" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":35,"skipped":274,"failed":3,"failures":["[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:24.662: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":18,"skipped":116,"failed":6,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":17,"skipped":205,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should create endpoints for unready pods","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":21,"skipped":183,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:33:13.648: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
Jul 10 08:33:18.586: INFO: PersistentVolumeClaim pvc-r77cm found but phase is Pending instead of Bound.
Jul 10 08:33:20.743: INFO: PersistentVolumeClaim pvc-r77cm found and phase=Bound (2.313116403s)
Jul 10 08:33:20.743: INFO: Waiting up to 3m0s for PersistentVolume local-zckxs to have phase Bound
Jul 10 08:33:20.899: INFO: PersistentVolume local-zckxs found and phase=Bound (155.509736ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-qnjn
STEP: Creating a pod to test exec-volume-test
Jul 10 08:33:21.369: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-qnjn" in namespace "volume-5981" to be "Succeeded or Failed"
Jul 10 08:33:21.524: INFO: Pod "exec-volume-test-preprovisionedpv-qnjn": Phase="Pending", Reason="", readiness=false. Elapsed: 155.791687ms
Jul 10 08:33:23.681: INFO: Pod "exec-volume-test-preprovisionedpv-qnjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.312104813s
STEP: Saw pod success
Jul 10 08:33:23.681: INFO: Pod "exec-volume-test-preprovisionedpv-qnjn" satisfied condition "Succeeded or Failed"
Jul 10 08:33:23.839: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-qnjn container exec-container-preprovisionedpv-qnjn: <nil>
STEP: delete the pod
Jul 10 08:33:24.162: INFO: Waiting for pod exec-volume-test-preprovisionedpv-qnjn to disappear
Jul 10 08:33:24.317: INFO: Pod exec-volume-test-preprovisionedpv-qnjn no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-qnjn
Jul 10 08:33:24.317: INFO: Deleting pod "exec-volume-test-preprovisionedpv-qnjn" in namespace "volume-5981"
... skipping 38 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":22,"skipped":183,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:26.349: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:33:26.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3739" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":36,"skipped":283,"failed":3,"failures":["[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:26.409: INFO: Driver local doesn't support ext4 -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 20 lines ...
STEP: Destroying namespace "services-5669" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":23,"skipped":193,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Jul 10 08:33:24.538: INFO: Waiting up to 5m0s for pod "pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b" in namespace "emptydir-4643" to be "Succeeded or Failed"
Jul 10 08:33:24.693: INFO: Pod "pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b": Phase="Pending", Reason="", readiness=false. Elapsed: 155.259196ms
Jul 10 08:33:26.848: INFO: Pod "pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309916993s
Jul 10 08:33:29.004: INFO: Pod "pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.465759279s
STEP: Saw pod success
Jul 10 08:33:29.004: INFO: Pod "pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b" satisfied condition "Succeeded or Failed"
Jul 10 08:33:29.158: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b container test-container: <nil>
STEP: delete the pod
Jul 10 08:33:29.475: INFO: Waiting for pod pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b to disappear
Jul 10 08:33:29.638: INFO: Pod pod-63ec7e97-285f-43bf-97dc-4554bb1f0f5b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":32,"skipped":192,"failed":4,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:33:29.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 10 08:33:32.012: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf" in namespace "downward-api-3888" to be "Succeeded or Failed"
Jul 10 08:33:32.167: INFO: Pod "downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 154.954046ms
Jul 10 08:33:34.323: INFO: Pod "downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.310504242s
STEP: Saw pod success
Jul 10 08:33:34.323: INFO: Pod "downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf" satisfied condition "Succeeded or Failed"
Jul 10 08:33:34.478: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf container client-container: <nil>
STEP: delete the pod
Jul 10 08:33:34.811: INFO: Waiting for pod downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf to disappear
Jul 10 08:33:34.967: INFO: Pod downwardapi-volume-2ba27005-43c7-4b95-ae55-28c5039ef3bf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:33:34.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3888" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":194,"failed":4,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:35.319: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":24,"skipped":197,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:41.846: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 166 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":30,"skipped":297,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager."]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:42.101: INFO: Only supported for providers [gce gke] (not aws)
... skipping 163 lines ...
Jul 10 08:33:42.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 10 08:33:43.096: INFO: Waiting up to 5m0s for pod "pod-a703a189-8055-4cad-b5c4-d9a3d358910a" in namespace "emptydir-9332" to be "Succeeded or Failed"
Jul 10 08:33:43.256: INFO: Pod "pod-a703a189-8055-4cad-b5c4-d9a3d358910a": Phase="Pending", Reason="", readiness=false. Elapsed: 160.412936ms
Jul 10 08:33:45.420: INFO: Pod "pod-a703a189-8055-4cad-b5c4-d9a3d358910a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323922582s
STEP: Saw pod success
Jul 10 08:33:45.420: INFO: Pod "pod-a703a189-8055-4cad-b5c4-d9a3d358910a" satisfied condition "Succeeded or Failed"
Jul 10 08:33:45.580: INFO: Trying to get logs from node ip-172-20-41-208.ap-northeast-2.compute.internal pod pod-a703a189-8055-4cad-b5c4-d9a3d358910a container test-container: <nil>
STEP: delete the pod
Jul 10 08:33:45.907: INFO: Waiting for pod pod-a703a189-8055-4cad-b5c4-d9a3d358910a to disappear
Jul 10 08:33:46.068: INFO: Pod pod-a703a189-8055-4cad-b5c4-d9a3d358910a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:33:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9332" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":303,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager."]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:46.436: INFO: Only supported for providers [azure] (not aws)
... skipping 45 lines ...
• [SLOW TEST:5.889 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":25,"skipped":201,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":94,"failed":4,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:33:42.706: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
Jul 10 08:33:43.485: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:33:43.642: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fpls
STEP: Creating a pod to test subpath
Jul 10 08:33:43.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fpls" in namespace "provisioning-7066" to be "Succeeded or Failed"
Jul 10 08:33:43.960: INFO: Pod "pod-subpath-test-inlinevolume-fpls": Phase="Pending", Reason="", readiness=false. Elapsed: 155.584987ms
Jul 10 08:33:46.116: INFO: Pod "pod-subpath-test-inlinevolume-fpls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311932993s
Jul 10 08:33:48.273: INFO: Pod "pod-subpath-test-inlinevolume-fpls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.46895997s
STEP: Saw pod success
Jul 10 08:33:48.273: INFO: Pod "pod-subpath-test-inlinevolume-fpls" satisfied condition "Succeeded or Failed"
Jul 10 08:33:48.429: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-fpls container test-container-subpath-inlinevolume-fpls: <nil>
STEP: delete the pod
Jul 10 08:33:48.773: INFO: Waiting for pod pod-subpath-test-inlinevolume-fpls to disappear
Jul 10 08:33:48.929: INFO: Pod pod-subpath-test-inlinevolume-fpls no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fpls
Jul 10 08:33:48.929: INFO: Deleting pod "pod-subpath-test-inlinevolume-fpls" in namespace "provisioning-7066"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":15,"skipped":94,"failed":4,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:33:49.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":16,"skipped":95,"failed":4,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Jul 10 08:33:51.146: INFO: Creating a PV followed by a PVC
Jul 10 08:33:51.470: INFO: Waiting for PV local-pvzfmmd to bind to PVC pvc-vpj8n
Jul 10 08:33:51.470: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vpj8n] to have phase Bound
Jul 10 08:33:51.631: INFO: PersistentVolumeClaim pvc-vpj8n found and phase=Bound (160.393296ms)
Jul 10 08:33:51.631: INFO: Waiting up to 3m0s for PersistentVolume local-pvzfmmd to have phase Bound
Jul 10 08:33:51.791: INFO: PersistentVolume local-pvzfmmd found and phase=Bound (160.162745ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
Jul 10 08:33:52.273: INFO: Waiting up to 5m0s for pod "pod-6b6c6a0b-e16f-4131-9ca8-51eee9e7858e" in namespace "persistent-local-volumes-test-2311" to be "Unschedulable"
Jul 10 08:33:52.433: INFO: Pod "pod-6b6c6a0b-e16f-4131-9ca8-51eee9e7858e": Phase="Pending", Reason="", readiness=false. Elapsed: 159.947866ms
Jul 10 08:33:52.433: INFO: Pod "pod-6b6c6a0b-e16f-4131-9ca8-51eee9e7858e" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:7.812 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":32,"skipped":312,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager."]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:54.290: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":34,"skipped":255,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","[sig-network] Services should be able to up and down services","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:55.166: INFO: Only supported for providers [gce gke] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":44,"skipped":403,"failed":3,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:33:33.246: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Jul 10 08:33:49.065: INFO: PersistentVolumeClaim pvc-cmb5k found but phase is Pending instead of Bound.
Jul 10 08:33:51.228: INFO: PersistentVolumeClaim pvc-cmb5k found and phase=Bound (13.142303254s)
Jul 10 08:33:51.228: INFO: Waiting up to 3m0s for PersistentVolume local-t6m84 to have phase Bound
Jul 10 08:33:51.390: INFO: PersistentVolume local-t6m84 found and phase=Bound (161.161196ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z8pf
STEP: Creating a pod to test subpath
Jul 10 08:33:51.883: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z8pf" in namespace "provisioning-7443" to be "Succeeded or Failed"
Jul 10 08:33:52.044: INFO: Pod "pod-subpath-test-preprovisionedpv-z8pf": Phase="Pending", Reason="", readiness=false. Elapsed: 161.684226ms
Jul 10 08:33:54.208: INFO: Pod "pod-subpath-test-preprovisionedpv-z8pf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.325017572s
STEP: Saw pod success
Jul 10 08:33:54.208: INFO: Pod "pod-subpath-test-preprovisionedpv-z8pf" satisfied condition "Succeeded or Failed"
Jul 10 08:33:54.369: INFO: Trying to get logs from node ip-172-20-35-182.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-z8pf container test-container-volume-preprovisionedpv-z8pf: <nil>
STEP: delete the pod
Jul 10 08:33:54.709: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z8pf to disappear
Jul 10 08:33:54.871: INFO: Pod pod-subpath-test-preprovisionedpv-z8pf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z8pf
Jul 10 08:33:54.871: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z8pf" in namespace "provisioning-7443"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":45,"skipped":403,"failed":3,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:33:58.131: INFO: Only supported for providers [vsphere] (not aws)
... skipping 74 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":26,"skipped":205,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:34:05.299: INFO: Only supported for providers [gce gke] (not aws)
... skipping 39 lines ...
Jul 10 08:31:01.452: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: the server is currently unable to handle the request (get pods dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd)
Jul 10 08:31:31.614: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9273.svc.cluster.local from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: the server is currently unable to handle the request (get pods dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd)
Jul 10 08:32:01.776: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: the server is currently unable to handle the request (get pods dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd)
Jul 10 08:32:31.935: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: the server is currently unable to handle the request (get pods dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd)
Jul 10 08:33:02.094: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: the server is currently unable to handle the request (get pods dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd)
Jul 10 08:33:32.254: INFO: Unable to read jessie_udp@kubernetes.default from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: the server is currently unable to handle the request (get pods dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd)
Jul 10 08:34:00.809: FAIL: Unable to read jessie_tcp@kubernetes.default from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-9273/pods/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x7904988, 0xc000124010, 0x7fec3464d5b8, 0x18, 0xc0043dcbd0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x7904988, 0xc000124010, 0xc003df04c0, 0x2a27200, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc0007f0780, 0x72c1e78)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0710 08:34:00.810137   12830 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul 10 08:34:00.809: Unable to read jessie_tcp@kubernetes.default from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-9273/pods/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd/proxy/results/jessie_tcp@kubernetes.default\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x7904988, 0xc000124010, 0x7fec3464d5b8, 0x18, 0xc0043dcbd0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x7904988, 0xc000124010, 0xc003df04c0, 0x2a27200, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x7904988, 0xc000124010, 0xc0043dcb01, 0xc0043dcbd0, 0xc003df04c0, 0x684b7c0, 0xc003df04c0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x7904988, 0xc000124010, 0x12a05f200, 0x8bb2c97000, 0xc003df04c0, 0x6d95be0, 0x2535101)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0028a9960, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0024ecb00, 0x10, 0x10, 0x7058c6e, 0x7, 0xc001cd7400, 0x7997ea8, 0xc0027d8420, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111cf20, 0xc001cd7400, 0xc0024ecb00, 0x10, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.3()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:107 +0x68f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc0007f0780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc0007f0780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc0007f0780, 0x72c1e78)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6be52e0, 0xc0030c80c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6be52e0, 0xc0030c80c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc004d48160, 0x153, 0x88e2b66, 0x7d, 0xd9, 0xc00020b400, 0xa8c)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x6312a40, 0x77bbcc0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc004d48160, 0x153, 0xc00259b698, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc004d48160, 0x153, 0xc00259b780, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x70fc2e0, 0x24, 0xc00259b9e0, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x7904988, 0xc000124010, 0x7fec3464d5b8, 0x18, 0xc0043dcbd0)
... skipping 288 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90

  Jul 10 08:34:00.809: Unable to read jessie_tcp@kubernetes.default from pod dns-9273/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-9273/pods/dns-test-9cd1f3cb-ea7c-4813-b7b5-0cebb26862bd/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
------------------------------
{"msg":"FAILED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":28,"skipped":180,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:13.265 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:480
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":33,"skipped":313,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager."]}

SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":18,"skipped":210,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should create endpoints for unready pods","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:34:09.162: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 79 lines ...
Jul 10 08:33:04.216: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 10 08:33:04.375: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathc7grg] to have phase Bound
Jul 10 08:33:04.532: INFO: PersistentVolumeClaim csi-hostpathc7grg found but phase is Pending instead of Bound.
Jul 10 08:33:06.690: INFO: PersistentVolumeClaim csi-hostpathc7grg found and phase=Bound (2.314995711s)
STEP: Expanding non-expandable pvc
Jul 10 08:33:07.003: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 10 08:33:07.328: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:09.645: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:11.643: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:13.644: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:15.649: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:17.644: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:19.643: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:21.643: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:23.649: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:25.649: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:27.644: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:29.649: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:31.645: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:33.648: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:35.645: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:37.642: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 10 08:33:37.957: INFO: Error updating pvc csi-hostpathc7grg: persistentvolumeclaims "csi-hostpathc7grg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul 10 08:33:37.957: INFO: Deleting PersistentVolumeClaim "csi-hostpathc7grg"
Jul 10 08:33:38.115: INFO: Waiting up to 5m0s for PersistentVolume pvc-ad04de9f-9470-4f9d-a1f5-6661a742791e to get deleted
Jul 10 08:33:38.271: INFO: PersistentVolume pvc-ad04de9f-9470-4f9d-a1f5-6661a742791e was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-8566
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":16,"skipped":98,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
Jul 10 08:34:06.106: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 10 08:34:06.106: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-78ng
STEP: Creating a pod to test subpath
Jul 10 08:34:06.266: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-78ng" in namespace "provisioning-8726" to be "Succeeded or Failed"
Jul 10 08:34:06.424: INFO: Pod "pod-subpath-test-inlinevolume-78ng": Phase="Pending", Reason="", readiness=false. Elapsed: 157.888046ms
Jul 10 08:34:08.584: INFO: Pod "pod-subpath-test-inlinevolume-78ng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318162561s
Jul 10 08:34:10.742: INFO: Pod "pod-subpath-test-inlinevolume-78ng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475484838s
STEP: Saw pod success
Jul 10 08:34:10.742: INFO: Pod "pod-subpath-test-inlinevolume-78ng" satisfied condition "Succeeded or Failed"
Jul 10 08:34:10.900: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-78ng container test-container-subpath-inlinevolume-78ng: <nil>
STEP: delete the pod
Jul 10 08:34:11.221: INFO: Waiting for pod pod-subpath-test-inlinevolume-78ng to disappear
Jul 10 08:34:11.376: INFO: Pod pod-subpath-test-inlinevolume-78ng no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-78ng
Jul 10 08:34:11.376: INFO: Deleting pod "pod-subpath-test-inlinevolume-78ng" in namespace "provisioning-8726"
... skipping 38 lines ...
Jul 10 08:34:04.531: INFO: PersistentVolumeClaim pvc-5k99l found but phase is Pending instead of Bound.
Jul 10 08:34:06.693: INFO: PersistentVolumeClaim pvc-5k99l found and phase=Bound (6.648761577s)
Jul 10 08:34:06.693: INFO: Waiting up to 3m0s for PersistentVolume local-47zrt to have phase Bound
Jul 10 08:34:06.855: INFO: PersistentVolume local-47zrt found and phase=Bound (161.399076ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5ghj
STEP: Creating a pod to test subpath
Jul 10 08:34:07.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5ghj" in namespace "provisioning-4156" to be "Succeeded or Failed"
Jul 10 08:34:07.503: INFO: Pod "pod-subpath-test-preprovisionedpv-5ghj": Phase="Pending", Reason="", readiness=false. Elapsed: 161.845346ms
Jul 10 08:34:09.665: INFO: Pod "pod-subpath-test-preprovisionedpv-5ghj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324001342s
STEP: Saw pod success
Jul 10 08:34:09.665: INFO: Pod "pod-subpath-test-preprovisionedpv-5ghj" satisfied condition "Succeeded or Failed"
Jul 10 08:34:09.827: INFO: Trying to get logs from node ip-172-20-37-88.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-5ghj container test-container-volume-preprovisionedpv-5ghj: <nil>
STEP: delete the pod
Jul 10 08:34:10.168: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5ghj to disappear
Jul 10 08:34:10.329: INFO: Pod pod-subpath-test-preprovisionedpv-5ghj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5ghj
Jul 10 08:34:10.329: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5ghj" in namespace "provisioning-4156"
... skipping 54 lines ...
• [SLOW TEST:7.376 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":34,"skipped":317,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager."]}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":46,"skipped":414,"failed":3,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:34:17.360: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 255 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:34:19.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4472" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":35,"skipped":318,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager."]}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:34:19.622: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":19,"skipped":214,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should create endpoints for unready pods","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 10 08:34:33.648: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:39
W0710 08:33:26.386188   13024 metrics_grabber.go:127] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
W0710 08:33:26.721595   13024 metrics_grabber.go:151] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
[It] should grab all metrics from a Scheduler.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:74
STEP: Proxying to Pod through the API server
Jul 10 08:34:29.252: FAIL: Unexpected error:
    <*errors.errorString | 0xc00164c0f0>: {
        s: "error waiting for scheduler pod to expose metrics: timed out waiting for the condition; unknown",
    }
    error waiting for scheduler pod to expose metrics: timed out waiting for the condition; unknown
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/instrumentation/monitoring.glob..func3.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:80 +0x166
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0001a8480)
... skipping 241 lines ...
• Failure [70.332 seconds]
[sig-instrumentation] MetricsGrabber
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should grab all metrics from a Scheduler. [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:74

  Jul 10 08:34:29.252: Unexpected error:
      <*errors.errorString | 0xc00164c0f0>: {
          s: "error waiting for scheduler pod to expose metrics: timed out waiting for the condition; unknown",
      }
      error waiting for scheduler pod to expose metrics: timed out waiting for the condition; unknown
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:80
------------------------------
{"msg":"FAILED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":18,"skipped":118,"failed":7,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler."]}
Jul 10 08:34:35.734: INFO: Running AfterSuite actions on all nodes
Jul 10 08:34:35.734: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:34:35.734: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:34:35.734: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:34:35.734: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:34:35.734: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul 10 08:34:35.735: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul 10 08:34:35.735: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":35,"skipped":257,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","[sig-network] Services should be able to up and down services","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:34:12.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-dl29
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 08:34:13.856: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dl29" in namespace "subpath-2243" to be "Succeeded or Failed"
Jul 10 08:34:14.018: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Pending", Reason="", readiness=false. Elapsed: 162.384636ms
Jul 10 08:34:16.181: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 2.324807213s
Jul 10 08:34:18.343: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 4.48748602s
Jul 10 08:34:20.506: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 6.650336868s
Jul 10 08:34:22.673: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 8.817131906s
Jul 10 08:34:24.836: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 10.979699723s
Jul 10 08:34:27.002: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 13.146255169s
Jul 10 08:34:29.171: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 15.315279447s
Jul 10 08:34:31.333: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 17.477331894s
Jul 10 08:34:33.496: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Running", Reason="", readiness=true. Elapsed: 19.639674971s
Jul 10 08:34:35.659: INFO: Pod "pod-subpath-test-projected-dl29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.803200286s
STEP: Saw pod success
Jul 10 08:34:35.659: INFO: Pod "pod-subpath-test-projected-dl29" satisfied condition "Succeeded or Failed"
Jul 10 08:34:35.821: INFO: Trying to get logs from node ip-172-20-49-206.ap-northeast-2.compute.internal pod pod-subpath-test-projected-dl29 container test-container-subpath-projected-dl29: <nil>
STEP: delete the pod
Jul 10 08:34:36.152: INFO: Waiting for pod pod-subpath-test-projected-dl29 to disappear
Jul 10 08:34:36.313: INFO: Pod pod-subpath-test-projected-dl29 no longer exists
STEP: Deleting pod pod-subpath-test-projected-dl29
Jul 10 08:34:36.313: INFO: Deleting pod "pod-subpath-test-projected-dl29" in namespace "subpath-2243"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":257,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","[sig-network] Services should be able to up and down services","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
Jul 10 08:34:36.810: INFO: Running AfterSuite actions on all nodes
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul 10 08:34:36.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":27,"skipped":210,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:34:12.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
Jul 10 08:34:12.804: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 10 08:34:13.121: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1641" in namespace "provisioning-1641" to be "Succeeded or Failed"
Jul 10 08:34:13.277: INFO: Pod "hostpath-symlink-prep-provisioning-1641": Phase="Pending", Reason="", readiness=false. Elapsed: 155.956456ms
Jul 10 08:34:15.434: INFO: Pod "hostpath-symlink-prep-provisioning-1641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313444684s
STEP: Saw pod success
Jul 10 08:34:15.434: INFO: Pod "hostpath-symlink-prep-provisioning-1641" satisfied condition "Succeeded or Failed"
Jul 10 08:34:15.434: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1641" in namespace "provisioning-1641"
Jul 10 08:34:15.597: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1641" to be fully deleted
Jul 10 08:34:15.753: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7rf6
Jul 10 08:34:18.224: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-1641 exec pod-subpath-test-inlinevolume-7rf6 --container test-container-volume-inlinevolume-7rf6 -- /bin/sh -c rm -r /test-volume/provisioning-1641'
Jul 10 08:34:19.793: INFO: stderr: ""
Jul 10 08:34:19.793: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-7rf6
Jul 10 08:34:19.793: INFO: Deleting pod "pod-subpath-test-inlinevolume-7rf6" in namespace "provisioning-1641"
Jul 10 08:34:19.950: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-7rf6" to be fully deleted
STEP: Deleting pod
Jul 10 08:34:34.264: INFO: Deleting pod "pod-subpath-test-inlinevolume-7rf6" in namespace "provisioning-1641"
Jul 10 08:34:34.576: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1641" in namespace "provisioning-1641" to be "Succeeded or Failed"
Jul 10 08:34:34.734: INFO: Pod "hostpath-symlink-prep-provisioning-1641": Phase="Pending", Reason="", readiness=false. Elapsed: 157.364216ms
Jul 10 08:34:36.890: INFO: Pod "hostpath-symlink-prep-provisioning-1641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313731303s
STEP: Saw pod success
Jul 10 08:34:36.890: INFO: Pod "hostpath-symlink-prep-provisioning-1641" satisfied condition "Succeeded or Failed"
Jul 10 08:34:36.890: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1641" in namespace "provisioning-1641"
Jul 10 08:34:37.052: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1641" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:34:37.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1641" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":28,"skipped":210,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
Jul 10 08:34:37.534: INFO: Running AfterSuite actions on all nodes
Jul 10 08:34:37.534: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:34:37.534: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:34:37.534: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:34:37.534: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:34:37.534: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 25 lines ...
• [SLOW TEST:30.371 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":17,"skipped":99,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
Jul 10 08:34:41.898: INFO: Running AfterSuite actions on all nodes
Jul 10 08:34:41.898: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:34:41.898: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:34:41.898: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:34:41.898: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:34:41.898: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 25 lines ...
• [SLOW TEST:244.368 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:295
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":16,"skipped":113,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
Jul 10 08:35:26.214: INFO: Running AfterSuite actions on all nodes
Jul 10 08:35:26.214: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:35:26.214: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:35:26.214: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:35:26.214: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:35:26.214: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 18 lines ...
Jul 10 08:34:14.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 08:34:16.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 08:34:18.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761502849, loc:(*time.Location)(0xa085940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 08:35:21.649: INFO: Waited 1m0.647707411s for the sample-apiserver to be ready to handle requests.
Jul 10 08:35:21.649: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"3b19f50f-f966-4334-8368-258285f1a1f7","resourceVersion":"47468","creationTimestamp":"2021-07-10T08:34:20Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2021-07-10T08:34:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2021-07-10T08:34:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-9189","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpFd056RXdNRGd6TkRBNFdoY05NekV3TnpBNE1EZ3pOREE0V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURIMWpsMkpTdTlxZ3FYVHVFZlo4UnZ3cjhmZERJeVpyS0NHbktJNWErK3RMaXEKZi9SR2VuOVlqbFo2WmZkZ2VTdE5EamowbmN6elRjMDkvdlZqWExFeFIyQy85L1FFRTdzQ0FaVGRVaUhvNWdkTwpraHQ2U1ZudVpiM1hxL1NLejdaQXdiZVRITFllcjRzL012a1FYbExFYldKTkJMUWtpdGhzMFJtK3g4b2k3YThrCm5zZHVoblhkNlMraFdKK09SaXZ1aVlsV0RJVGJKYUVSdVl2Q2MxTi9mK3NYSmJPVU0vTmRkN0NCQ2EwcUdaOFkKb2NJU3c4Vi83ZThXL3kyUEUzSWdWdm1GcXV2blRkQXNxcHdGS0lRNTltcVZkTzZQTmk1NGhKcmpza0NYWGllVwpLRGVvdVhoQ0ZucUNEVTlhWm9kMERlRGhkeXNwb042WmgvRk5TTVZsQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYmdZOWo1aDVGVXQwNnd1VVYKdEVCR2xlVTJVekFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBSVFSa0Z1aUV0YkU1TENaNU00bVlRYjJXNlczWVAxV2hTa3BLQ3JKTEJFUjZhK2lMR0dCClgzaDJCNDhHRXFQemdpTTJwQ3pGR29XZzJBVkJEZnlDTGt1K1lZVFFyejQ2L1VhMXNvLzVOeGFzTEZHTFVVV2wKRFV0M2s5MkNNZTVoZEw1SDZ4Q00xQ2wwS1h0eGhCM3ZJQ3NDVkdacW8xbDF5VnNaeWJhdE5ycFAxanE5VEVVaQpWNVZLRytJdW1DTDVNSVNmdjlteGlLdUF3QXdvdVJFVmRGZjVXWjNEMTJ0R1J1VGpFeXArcWFnQ2JnNDFsUjZECmoyZ3E1dTJ5MGdIcnZaSmk4L3V5ZGYyQVNHMDZ2Q0hvYUxTTkY5NjBSMWVtVkJHV2RjdDlCTGdFbnZuSUEvOGMKMGVCSHdCUklyNVVvL09uU1hLQ0NUTlVTRi9UanBtWk8zS1E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2021-07-10T08:34:20Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://100.68.239.176:7443/apis/wardle.example.com/v1alpha1: Get \"https://100.68.239.176:7443/apis/wardle.example.com/v1alpha1\": dial tcp 100.68.239.176:7443: i/o timeout"}]}}
Jul 10 08:35:21.651: INFO: current pods: {"metadata":{"resourceVersion":"47579"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-dlr89","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-9189","uid":"6f31e1a0-ceea-4e0f-a231-b27a61778c5d","resourceVersion":"47176","creationTimestamp":"2021-07-10T08:34:09Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"d0c08fe2-8580-4564-b47c-9198f713b642","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-07-10T08:34:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c08fe2-8580-4564-b47c-9198f713b642\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-07-10T08:34:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.110\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-lgk8z","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-lgk8z","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/de
v/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-lgk8z","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-49-206.ap-northeast-2.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-10T08:34:09Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-10T08:34:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-10T08:34:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-10T08:34:09Z"}],"hostIP":"172.20.49.206","podIP":"100.96.4.110","podIPs":[{"ip":"100.96.4.110"}],"startTime":"2021-07-10T08:34:09Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2021-07-10T08:34:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2","containerID":"containerd://e97f3b27919f3ca61ea39ed4c31eb5c1778863a786d2914c2b1641c394785095","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2021-07-10T08:34:13Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://b659519416c78194f3339dddc52d3557d9574136b41806fe6a23b9ae18254e54","started":true}],"qosClass":"BestEffort"}}]}
Jul 10 08:35:21.820: INFO: logs of sample-apiserver-deployment-64f6b9dc99-dlr89/sample-apiserver (error: <nil>): W0710 08:34:13.770625       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0710 08:34:13.770710       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0710 08:34:13.795366       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0710 08:34:13.795385       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0710 08:34:13.797860       1 client.go:361] parsed scheme: "endpoint"
I0710 08:34:13.797885       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0710 08:34:13.800162       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0710 08:34:14.109031       1 client.go:361] parsed scheme: "endpoint"
I0710 08:34:14.109113       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0710 08:34:14.109954       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0710 08:34:14.800502       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0710 08:34:15.110255       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0710 08:34:16.188657       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0710 08:34:16.988944       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0710 08:34:18.664227       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0710 08:34:22.583279       1 client.go:361] parsed scheme: "endpoint"
I0710 08:34:22.583311       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0710 08:34:22.584512       1 client.go:361] parsed scheme: "endpoint"
I0710 08:34:22.584534       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0710 08:34:22.586301       1 client.go:361] parsed scheme: "endpoint"
I0710 08:34:22.586371       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
I0710 08:34:22.627230       1 secure_serving.go:178] Serving securely on [::]:443
I0710 08:34:22.627660       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I0710 08:34:22.627787       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0710 08:34:22.726720       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0710 08:34:22.726961       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

Jul 10 08:35:21.980: INFO: logs of sample-apiserver-deployment-64f6b9dc99-dlr89/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-07-10 08:34:19.256114 I | etcdmain: etcd Version: 3.4.13
2021-07-10 08:34:19.256151 I | etcdmain: Git SHA: ae9734ed2
2021-07-10 08:34:19.256155 I | etcdmain: Go Version: go1.12.17
2021-07-10 08:34:19.256159 I | etcdmain: Go OS/Arch: linux/amd64
2021-07-10 08:34:19.256163 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2021-07-10 08:34:19.256170 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2021-07-10 08:34:19.967678 I | etcdserver: setting up the initial cluster version to 3.4
2021-07-10 08:34:19.967756 I | embed: ready to serve client requests
2021-07-10 08:34:19.968966 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2021-07-10 08:34:19.970087 N | etcdserver/membership: set the initial cluster version to 3.4
2021-07-10 08:34:19.970181 I | etcdserver/api: enabled capabilities for version 3.4

Jul 10 08:35:21.980: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 248 lines ...
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:35:21.980: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:406
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":28,"skipped":181,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
Jul 10 08:35:29.731: INFO: Running AfterSuite actions on all nodes
Jul 10 08:35:29.731: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:35:29.731: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:35:29.731: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:35:29.731: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:35:29.731: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 121 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jul 10 08:30:25.489: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jul 10 08:30:27.261: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jul 10 08:30:27.261: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jul 10 08:30:29.033: INFO: rc: 255
Jul 10 08:30:29.033: INFO: got err error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0710 08:30:28.820599     198 merged_client_builder.go:163] Using in-cluster namespace
I0710 08:30:28.820784     198 merged_client_builder.go:121] Using in-cluster configuration
I0710 08:30:28.823318     198 merged_client_builder.go:121] Using in-cluster configuration
I0710 08:30:28.826421     198 merged_client_builder.go:121] Using in-cluster configuration
I0710 08:30:28.826864     198 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-9244/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0710 08:30:28.833778     198 helpers.go:116] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc00052a000, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0006360e0, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005fff70, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0002d1b40, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2287a60, 0xc00011bf50, 0x2108eb0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:178 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0001d5b80, 0xc000827410, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Jul 10 08:30:29.033: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jul 10 08:33:00.681: INFO: rc: 255
Jul 10 08:33:00.681: INFO: got err error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0710 08:30:30.582295     212 merged_client_builder.go:163] Using in-cluster namespace
I0710 08:31:00.583512     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0710 08:31:00.583654     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0710 08:31:30.584708     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0710 08:31:30.584789     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0710 08:31:30.584810     212 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0710 08:32:00.585206     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0710 08:32:00.585444     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0710 08:32:30.586145     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0710 08:32:30.586214     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0710 08:33:00.587963     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30001 milliseconds
I0710 08:33:00.588045     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0710 08:33:00.588113     212 helpers.go:235] Connection error: Get http://invalid/api?timeout=32s: dial tcp: i/o timeout
F0710 08:33:00.588135     212 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0004650e0, 0x65, 0x9a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0004bcd90, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005aa280, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000352600, 0x36, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2286da0, 0xc000634600, 0x2108eb0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc000571900, 0xc0005088a0, 0x1, 0x3)
... skipping 88 lines ...
	/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jul 10 08:33:00.681: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jul 10 08:33:02.404: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jul 10 08:33:02.404: INFO: stdout: "I0710 08:33:02.292069     224 merged_client_builder.go:121] Using in-cluster configuration\nI0710 08:33:02.297166     224 merged_client_builder.go:121] Using in-cluster configuration\nI0710 08:33:02.300708     224 merged_client_builder.go:121] Using in-cluster configuration\nI0710 08:33:02.313517     224 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 12 milliseconds\nNo resources found in invalid namespace.\n"
Jul 10 08:33:02.404: INFO: stdout: I0710 08:33:02.292069     224 merged_client_builder.go:121] Using in-cluster configuration
... skipping 7 lines ...
Jul 10 08:35:34.074: INFO: rc: 255
Jul 10 08:35:34.074: INFO: stdout: I0710 08:33:03.980062     235 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig
I0710 08:33:33.982858     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds
I0710 08:33:33.982946     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0710 08:34:03.984139     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0710 08:34:03.984210     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0710 08:34:03.984228     235 shortcut.go:89] Error loading discovery information: Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0710 08:34:33.984820     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0710 08:34:33.984893     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0710 08:35:03.985637     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0710 08:35:03.985713     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0710 08:35:33.986522     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0710 08:35:33.986600     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0710 08:35:33.986663     235 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout
F0710 08:35:33.986686     235 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0003c5a40, 0x65, 0xb7)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0001724d0, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0003ec3a0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000a0500, 0x36, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2286da0, 0xc0003866f0, 0x2108eb0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc00052b900, 0xc0007e6960, 0x1, 0x3)
... skipping 84 lines ...
	/usr/local/go/src/net/lookup.go:293 +0xba
internal/singleflight.(*Group).doCall(0x30efbf0, 0xc000330410, 0xc0003db140, 0x1a, 0xc000135a00)
	/usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
created by internal/singleflight.(*Group).DoChan
	/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc

Jul 10 08:35:34.075: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1:\nCommand stdout:\nI0710 08:33:03.980062     235 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig\nI0710 08:33:33.982858     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds\nI0710 08:33:33.982946     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:34:03.984139     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:34:03.984210     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:34:03.984228     235 shortcut.go:89] Error loading discovery information: Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:34:33.984820     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:34:33.984893     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:35:03.985637     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:35:03.985713     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:35:33.986522     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:35:33.986600     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:35:33.986663     235 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout\nF0710 08:35:33.986686     235 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout\ngoroutine 1 [running]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0003c5a40, 0x65, 0xb7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0001724d0, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0003ec3a0, 0x1, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000a0500, 0x36, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 
+0x288\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2286da0, 0xc0003866f0, 0x2108eb0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc00052b900, 0xc0007e6960, 0x1, 0x3)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:180 +0x159\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00052b900, 0xc0007e6930, 0x3, 0x3, 0xc00052b900, 0xc0007e6930)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0001b1680, 0xc000130120, 0xc000136000, 0x5)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897\nmain.main()\n\t_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d\n\ngoroutine 18 [chan receive]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x30f1420)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b\ncreated by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xdf\n\ngoroutine 20 [select]:\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2108db8, 0x2285260, 0xc0003e0000, 0x1, 0xc000100b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x2108db8, 0x12a05f200, 0x0, 0x1, 0xc000100b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x2108db8, 0x12a05f200, 0xc000100b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d\ncreated by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96\n\ngoroutine 130 [IO wait]:\ninternal/poll.runtime_pollWait(0x7f42bef1b2b0, 0x72, 0xffffffffffffffff)\n\t/usr/local/go/src/runtime/netpoll.go:222 +0x55\ninternal/poll.(*pollDesc).wait(0xc0000ac798, 0x72, 0x200, 0x200, 0xffffffffffffffff)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc0000ac780, 0xc0000da800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5\nnet.(*netFD).Read(0xc0000ac780, 0xc0000da800, 0x200, 0x200, 0x0, 0xc0008463e0, 
0x450c8c)\n\t/usr/local/go/src/net/fd_posix.go:55 +0x4f\nnet.(*conn).Read(0xc00044a028, 0xc0000da800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/net.go:183 +0x91\nnet.dnsPacketRoundTrip(0x22d2500, 0xc00044a028, 0x6e726562756bdba2, 0x6665642e73657465, 0x6376732e746c7561, 0x72657473756c632e, 0x2e6c61636f6c2e, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:86 +0x135\nnet.(*Resolver).exchange(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0003e8fb0, 0xe, 0x74656e726562756b, 0x75616665642e7365, 0x632e6376732e746c, 0x6c2e72657473756c, 0x2e6c61636f, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:165 +0x4a8\nnet.(*Resolver).tryOneName(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0x1, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:255 +0x347\nnet.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0xc0002c0540, 0xc000120001)\n\t/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb\ncreated by net.(*Resolver).goLookupIPCNAMEOrder.func3\n\t/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8\n\ngoroutine 131 [IO wait]:\ninternal/poll.runtime_pollWait(0x7f42bef1b398, 0x72, 0xffffffffffffffff)\n\t/usr/local/go/src/runtime/netpoll.go:222 +0x55\ninternal/poll.(*pollDesc).wait(0xc0007f8c98, 0x72, 0x200, 0x200, 0xffffffffffffffff)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc0007f8c80, 0xc000856800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5\nnet.(*netFD).Read(0xc0007f8c80, 0xc000856800, 0x200, 0x200, 0x0, 0xc0008783e0, 0x450c8c)\n\t/usr/local/go/src/net/fd_posix.go:55 +0x4f\nnet.(*conn).Read(0xc00000e030, 0xc000856800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/net.go:183 +0x91\nnet.dnsPacketRoundTrip(0x22d2500, 0xc00000e030, 0x6e726562756be1aa, 0x6665642e73657465, 0x6376732e746c7561, 0x72657473756c632e, 0x2e6c61636f6c2e, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:86 +0x135\nnet.(*Resolver).exchange(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0003e8fb0, 0xe, 0x74656e726562756b, 0x75616665642e7365, 0x632e6376732e746c, 0x6c2e72657473756c, 0x2e6c61636f, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:165 +0x4a8\nnet.(*Resolver).tryOneName(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0x1c, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:255 +0x347\nnet.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0xc0002c0540, 0xc00012001c)\n\t/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb\ncreated by net.(*Resolver).goLookupIPCNAMEOrder.func3\n\t/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8\n\ngoroutine 96 [chan receive]:\nnet.(*Resolver).goLookupIPCNAMEOrder.func4(0xc0002dba10, 0x25, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:607 +0xab\nnet.(*Resolver).goLookupIPCNAMEOrder(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0003db0e0, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:617 +0x806\nnet.(*Resolver).lookupIP(0x30efbe0, 0x22be100, 0xc0001359c0, 0x1e6b341, 0x3, 0xc0003db0e0, 0x16, 0x2, 0x3, 0x0, ...)\n\t/usr/local/go/src/net/lookup_unix.go:102 +0xe5\nnet.glob..func1(0x22be100, 0xc0001359c0, 0xc0003eca80, 0x1e6b341, 0x3, 0xc0003db0e0, 0x16, 0xc0002c0180, 0x0, 0xc000101500, ...)\n\t/usr/local/go/src/net/hook.go:23 
+0x72\nnet.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/lookup.go:293 +0xba\ninternal/singleflight.(*Group).doCall(0x30efbf0, 0xc000330410, 0xc0003db140, 0x1a, 0xc000135a00)\n\t/usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e\ncreated by internal/singleflight.(*Group).DoChan\n\t/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc\n\nstderr:\n+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
    error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1:
    Command stdout:
    I0710 08:33:03.980062     235 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig
    I0710 08:33:33.982858     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds
    I0710 08:33:33.982946     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0710 08:34:03.984139     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0710 08:34:03.984210     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0710 08:34:03.984228     235 shortcut.go:89] Error loading discovery information: Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0710 08:34:33.984820     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0710 08:34:33.984893     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0710 08:35:03.985637     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0710 08:35:03.985713     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0710 08:35:33.986522     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0710 08:35:33.986600     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0710 08:35:33.986663     235 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout
    F0710 08:35:33.986686     235 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
    goroutine 1 [running]:
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0003c5a40, 0x65, 0xb7)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0001724d0, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0003ec3a0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000a0500, 0x36, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2286da0, 0xc0003866f0, 0x2108eb0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc00052b900, 0xc0007e6960, 0x1, 0x3)
... skipping 88 lines ...
    	/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
    
    stderr:
    + /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'
    command terminated with exit code 255
    
    error:
    exit status 255
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.RunHostCmdOrDie(0xc001139640, 0xc, 0x705390b, 0x5, 0xc003526050, 0x4a, 0xb, 0xc003526050)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1102 +0x225
... skipping 246 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should handle in-cluster config [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:646

    Jul 10 08:35:34.076: Unexpected error:
        <exec.CodeExitError>: {
            Err: {
                s: "error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1:\nCommand stdout:\nI0710 08:33:03.980062     235 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig\nI0710 08:33:33.982858     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds\nI0710 08:33:33.982946     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:34:03.984139     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:34:03.984210     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:34:03.984228     235 shortcut.go:89] Error loading discovery information: Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:34:33.984820     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:34:33.984893     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:35:03.985637     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:35:03.985713     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:35:33.986522     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0710 08:35:33.986600     235 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0710 08:35:33.986663     235 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout\nF0710 08:35:33.986686     235 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout\ngoroutine 1 [running]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0003c5a40, 0x65, 0xb7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0001724d0, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0003ec3a0, 0x1, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000a0500, 0x36, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 
+0x288\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2286da0, 0xc0003866f0, 0x2108eb0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc00052b900, 0xc0007e6960, 0x1, 0x3)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:180 +0x159\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00052b900, 0xc0007e6930, 0x3, 0x3, 0xc00052b900, 0xc0007e6930)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0001b1680, 0xc000130120, 0xc000136000, 0x5)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897\nmain.main()\n\t_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d\n\ngoroutine 18 [chan receive]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x30f1420)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b\ncreated by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xdf\n\ngoroutine 20 [select]:\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2108db8, 0x2285260, 0xc0003e0000, 0x1, 0xc000100b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x2108db8, 0x12a05f200, 0x0, 0x1, 0xc000100b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x2108db8, 0x12a05f200, 0xc000100b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d\ncreated by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96\n\ngoroutine 130 [IO wait]:\ninternal/poll.runtime_pollWait(0x7f42bef1b2b0, 0x72, 0xffffffffffffffff)\n\t/usr/local/go/src/runtime/netpoll.go:222 +0x55\ninternal/poll.(*pollDesc).wait(0xc0000ac798, 0x72, 0x200, 0x200, 0xffffffffffffffff)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc0000ac780, 0xc0000da800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5\nnet.(*netFD).Read(0xc0000ac780, 0xc0000da800, 0x200, 0x200, 0x0, 0xc0008463e0, 
0x450c8c)\n\t/usr/local/go/src/net/fd_posix.go:55 +0x4f\nnet.(*conn).Read(0xc00044a028, 0xc0000da800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/net.go:183 +0x91\nnet.dnsPacketRoundTrip(0x22d2500, 0xc00044a028, 0x6e726562756bdba2, 0x6665642e73657465, 0x6376732e746c7561, 0x72657473756c632e, 0x2e6c61636f6c2e, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:86 +0x135\nnet.(*Resolver).exchange(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0003e8fb0, 0xe, 0x74656e726562756b, 0x75616665642e7365, 0x632e6376732e746c, 0x6c2e72657473756c, 0x2e6c61636f, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:165 +0x4a8\nnet.(*Resolver).tryOneName(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0x1, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:255 +0x347\nnet.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0xc0002c0540, 0xc000120001)\n\t/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb\ncreated by net.(*Resolver).goLookupIPCNAMEOrder.func3\n\t/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8\n\ngoroutine 131 [IO wait]:\ninternal/poll.runtime_pollWait(0x7f42bef1b398, 0x72, 0xffffffffffffffff)\n\t/usr/local/go/src/runtime/netpoll.go:222 +0x55\ninternal/poll.(*pollDesc).wait(0xc0007f8c98, 0x72, 0x200, 0x200, 0xffffffffffffffff)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc0007f8c80, 0xc000856800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5\nnet.(*netFD).Read(0xc0007f8c80, 0xc000856800, 0x200, 0x200, 0x0, 0xc0008783e0, 0x450c8c)\n\t/usr/local/go/src/net/fd_posix.go:55 +0x4f\nnet.(*conn).Read(0xc00000e030, 0xc000856800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/net.go:183 +0x91\nnet.dnsPacketRoundTrip(0x22d2500, 0xc00000e030, 0x6e726562756be1aa, 0x6665642e73657465, 0x6376732e746c7561, 0x72657473756c632e, 0x2e6c61636f6c2e, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:86 +0x135\nnet.(*Resolver).exchange(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0003e8fb0, 0xe, 0x74656e726562756b, 0x75616665642e7365, 0x632e6376732e746c, 0x6c2e72657473756c, 0x2e6c61636f, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:165 +0x4a8\nnet.(*Resolver).tryOneName(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0x1c, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:255 +0x347\nnet.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0007f06e0, 0xc0002dba10, 0x25, 0xc0002c0540, 0xc00012001c)\n\t/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb\ncreated by net.(*Resolver).goLookupIPCNAMEOrder.func3\n\t/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8\n\ngoroutine 96 [chan receive]:\nnet.(*Resolver).goLookupIPCNAMEOrder.func4(0xc0002dba10, 0x25, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:607 +0xab\nnet.(*Resolver).goLookupIPCNAMEOrder(0x30efbe0, 0x22be100, 0xc0001359c0, 0xc0003db0e0, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:617 +0x806\nnet.(*Resolver).lookupIP(0x30efbe0, 0x22be100, 0xc0001359c0, 0x1e6b341, 0x3, 0xc0003db0e0, 0x16, 0x2, 0x3, 0x0, ...)\n\t/usr/local/go/src/net/lookup_unix.go:102 +0xe5\nnet.glob..func1(0x22be100, 0xc0001359c0, 0xc0003eca80, 0x1e6b341, 0x3, 0xc0003db0e0, 0x16, 0xc0002c0180, 0x0, 0xc000101500, ...)\n\t/usr/local/go/src/net/hook.go:23 
+0x72\nnet.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/lookup.go:293 +0xba\ninternal/singleflight.(*Group).doCall(0x30efbf0, 0xc000330410, 0xc0003db140, 0x1a, 0xc000135a00)\n\t/usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e\ncreated by internal/singleflight.(*Group).DoChan\n\t/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc\n\nstderr:\n+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
            },
            Code: 255,
        }
        error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9244 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1:
        Command stdout:
        I0710 08:33:03.980062     235 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig
        I0710 08:33:33.982858     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds
        I0710 08:33:33.982946     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
        I0710 08:34:03.984139     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
        I0710 08:34:03.984210     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
        I0710 08:34:03.984228     235 shortcut.go:89] Error loading discovery information: Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
        I0710 08:34:33.984820     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
        I0710 08:34:33.984893     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
        I0710 08:35:03.985637     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
        I0710 08:35:03.985713     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
        I0710 08:35:33.986522     235 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
        I0710 08:35:33.986600     235 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
        I0710 08:35:33.986663     235 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout
        F0710 08:35:33.986686     235 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
        goroutine 1 [running]:
        k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0003c5a40, 0x65, 0xb7)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
        k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30f1420, 0xc000000003, 0x0, 0x0, 0xc0001724d0, 0x2, 0x28132d1, 0xa, 0x74, 0x40e300)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
        k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30f1420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0003ec3a0, 0x1, 0x1)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
        k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
        k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000a0500, 0x36, 0x1)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
        k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2286da0, 0xc0003866f0, 0x2108eb0)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
        k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
        	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
        k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc00052b900, 0xc0007e6960, 0x1, 0x3)
... skipping 88 lines ...
        	/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
        
        stderr:
        + /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'
        command terminated with exit code 255
        
        error:
        exit status 255
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1102
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":40,"skipped":331,"failed":1,"failures":["[sig-cli] Kubectl client Simple pod should handle in-cluster config"]}
Jul 10 08:35:44.054: INFO: Running AfterSuite actions on all nodes
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul 10 08:35:44.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":36,"skipped":303,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:30:19.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
Jul 10 08:30:37.312: INFO: PersistentVolumeClaim pvc-8gcz7 found and phase=Bound (13.134856724s)
Jul 10 08:30:37.312: INFO: Waiting up to 3m0s for PersistentVolume nfs-5gc6x to have phase Bound
Jul 10 08:30:37.473: INFO: PersistentVolume nfs-5gc6x found and phase=Bound (160.711316ms)
STEP: Checking pod has write access to PersistentVolume
Jul 10 08:30:37.798: INFO: Creating nfs test pod
Jul 10 08:30:37.960: INFO: Pod should terminate with exitcode 0 (success)
Jul 10 08:30:37.960: INFO: Waiting up to 5m0s for pod "pvc-tester-299zb" in namespace "pv-8589" to be "Succeeded or Failed"
Jul 10 08:30:38.121: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 161.395375ms
Jul 10 08:30:40.284: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323645408s
Jul 10 08:30:42.446: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48637217s
Jul 10 08:30:44.608: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648173232s
Jul 10 08:30:46.773: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813002336s
Jul 10 08:30:48.935: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.974746019s
... skipping 129 lines ...
Jul 10 08:35:30.099: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.138809001s
Jul 10 08:35:32.261: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.301061238s
Jul 10 08:35:34.423: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.462976845s
Jul 10 08:35:36.585: INFO: Pod "pvc-tester-299zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.625510043s
Jul 10 08:35:38.586: INFO: Deleting pod "pvc-tester-299zb" in namespace "pv-8589"
Jul 10 08:35:38.749: INFO: Wait up to 5m0s for pod "pvc-tester-299zb" to be fully deleted
Jul 10 08:35:43.072: FAIL: Unexpected error:
    <*errors.errorString | 0xc0049065c0>: {
        s: "pod \"pvc-tester-299zb\" did not exit with Success: pod \"pvc-tester-299zb\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-299zb\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-299zb" did not exit with Success: pod "pvc-tester-299zb" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-299zb" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc003e43080, 0x7997ea8, 0xc003bf8f20, 0xc002bc0bb0, 0x7, 0xc000839180, 0xc002c67a40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.3()
... skipping 23 lines ...
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:30:21 +0000 UTC - event for nfs-server: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:30:21 +0000 UTC - event for nfs-server: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Created: Created container nfs-server
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:30:21 +0000 UTC - event for nfs-server: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Started: Started container nfs-server
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:30:23 +0000 UTC - event for pvc-8gcz7: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:30:37 +0000 UTC - event for pvc-tester-299zb: {default-scheduler } Scheduled: Successfully assigned pv-8589/pvc-tester-299zb to ip-172-20-49-206.ap-northeast-2.compute.internal
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:32:40 +0000 UTC - event for pvc-tester-299zb: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-9ttgq]: timed out waiting for the condition
Jul 10 08:35:54.047: INFO: At 2021-07-10 08:33:38 +0000 UTC - event for pvc-tester-299zb: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-5gc6x" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.2.53:/exports /var/lib/kubelet/pods/68e4b1fc-1108-4950-90d8-90c542bc7023/volumes/kubernetes.io~nfs/nfs-5gc6x
Output: mount.nfs: Connection timed out

Jul 10 08:35:54.047: INFO: At 2021-07-10 08:35:43 +0000 UTC - event for nfs-server: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Killing: Stopping container nfs-server
Jul 10 08:35:54.208: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 212 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178

      Jul 10 08:35:43.072: Unexpected error:
          <*errors.errorString | 0xc0049065c0>: {
              s: "pod \"pvc-tester-299zb\" did not exit with Success: pod \"pvc-tester-299zb\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-299zb\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-299zb" did not exit with Success: pod "pvc-tester-299zb" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-299zb" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":36,"skipped":303,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}
Jul 10 08:36:00.248: INFO: Running AfterSuite actions on all nodes
Jul 10 08:36:00.248: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:36:00.248: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:36:00.248: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:36:00.248: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:36:00.248: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":34,"skipped":203,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
Jul 10 08:36:12.044: INFO: Running AfterSuite actions on all nodes
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul 10 08:36:12.044: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":41,"skipped":313,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 10 08:33:46.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":42,"skipped":313,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
Jul 10 08:36:20.677: INFO: Running AfterSuite actions on all nodes
Jul 10 08:36:20.677: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:36:20.677: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:36:20.677: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:36:20.677: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:36:20.677: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 15 lines ...
STEP: creating replication controller nodeport-test in namespace services-9565
I0710 08:33:18.770280   12931 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9565, replica count: 2
I0710 08:33:21.970859   12931 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 10 08:33:21.970: INFO: Creating new exec pod
Jul 10 08:33:27.621: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:33:31.263: INFO: rc: 1
Jul 10 08:33:31.263: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:33:32.263: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:33:35.919: INFO: rc: 1
Jul 10 08:33:35.920: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:33:36.263: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:33:42.873: INFO: rc: 1
Jul 10 08:33:42.873: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 nodeport-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:33:43.263: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:33:49.903: INFO: rc: 1
Jul 10 08:33:49.903: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:33:50.264: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:33:53.873: INFO: rc: 1
Jul 10 08:33:53.873: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:33:54.263: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:34:00.893: INFO: rc: 1
Jul 10 08:34:00.893: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:01.263: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:34:07.870: INFO: rc: 1
Jul 10 08:34:07.870: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:08.263: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:34:11.863: INFO: rc: 1
Jul 10 08:34:11.863: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 nodeport-test 80
+ echo hostName
nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:12.264: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul 10 08:34:13.931: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Jul 10 08:34:13.931: INFO: stdout: "nodeport-test-hpz84"
Jul 10 08:34:13.931: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.254.199 80'
Jul 10 08:34:17.626: INFO: rc: 1
Jul 10 08:34:17.626: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.254.199 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.254.199 80
nc: connect to 100.70.254.199 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:18.628: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.254.199 80'
Jul 10 08:34:22.244: INFO: rc: 1
Jul 10 08:34:22.244: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.254.199 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.70.254.199 80
nc: connect to 100.70.254.199 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:22.627: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.254.199 80'
Jul 10 08:34:24.271: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.254.199 80\nConnection to 100.70.254.199 80 port [tcp/http] succeeded!\n"
Jul 10 08:34:24.271: INFO: stdout: "nodeport-test-hpz84"
Jul 10 08:34:24.271: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.37.88 32451'
Jul 10 08:34:25.857: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.37.88 32451\nConnection to 172.20.37.88 32451 port [tcp/*] succeeded!\n"
Jul 10 08:34:25.857: INFO: stdout: "nodeport-test-79vh7"
Jul 10 08:34:25.857: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:29.462: INFO: rc: 1
Jul 10 08:34:29.462: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:30.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:35.111: INFO: rc: 1
Jul 10 08:34:35.111: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:35.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:39.407: INFO: rc: 1
Jul 10 08:34:39.407: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:39.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:43.419: INFO: rc: 1
Jul 10 08:34:43.420: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:43.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:47.434: INFO: rc: 1
Jul 10 08:34:47.434: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:47.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:51.408: INFO: rc: 1
Jul 10 08:34:51.408: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:51.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:55.422: INFO: rc: 1
Jul 10 08:34:55.422: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:55.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:34:59.410: INFO: rc: 1
Jul 10 08:34:59.410: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.41.208 32451
+ echo hostName
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:59.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:03.422: INFO: rc: 1
Jul 10 08:35:03.423: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:03.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:07.400: INFO: rc: 1
Jul 10 08:35:07.401: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:07.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:11.384: INFO: rc: 1
Jul 10 08:35:11.384: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:11.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:15.447: INFO: rc: 1
Jul 10 08:35:15.447: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:15.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:19.427: INFO: rc: 1
Jul 10 08:35:19.428: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:19.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:23.397: INFO: rc: 1
Jul 10 08:35:23.397: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:23.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:27.369: INFO: rc: 1
Jul 10 08:35:27.370: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:27.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:31.361: INFO: rc: 1
Jul 10 08:35:31.361: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:31.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:35.370: INFO: rc: 1
Jul 10 08:35:35.370: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:35.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:39.376: INFO: rc: 1
Jul 10 08:35:39.377: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:39.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:43.432: INFO: rc: 1
Jul 10 08:35:43.432: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.41.208 32451
+ echo hostName
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:43.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:47.373: INFO: rc: 1
Jul 10 08:35:47.373: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:47.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:51.387: INFO: rc: 1
Jul 10 08:35:51.387: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:51.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:55.365: INFO: rc: 1
Jul 10 08:35:55.366: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.41.208 32451
+ echo hostName
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:55.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:35:59.421: INFO: rc: 1
Jul 10 08:35:59.421: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:59.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:03.411: INFO: rc: 1
Jul 10 08:36:03.411: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:03.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:07.430: INFO: rc: 1
Jul 10 08:36:07.430: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:07.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:11.395: INFO: rc: 1
Jul 10 08:36:11.395: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:11.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:15.435: INFO: rc: 1
Jul 10 08:36:15.435: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.41.208 32451
+ echo hostName
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:15.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:19.420: INFO: rc: 1
Jul 10 08:36:19.420: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:19.462: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:23.432: INFO: rc: 1
Jul 10 08:36:23.432: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:23.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:27.393: INFO: rc: 1
Jul 10 08:36:27.393: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:27.463: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:31.648: INFO: rc: 1
Jul 10 08:36:31.648: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:31.648: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451'
Jul 10 08:36:35.594: INFO: rc: 1
Jul 10 08:36:35.594: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9565 exec execpodg5h68 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.41.208 32451:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.41.208 32451
nc: connect to 172.20.41.208 port 32451 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:35.594: FAIL: Unexpected error:
    <*errors.errorString | 0xc0046f61f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.41.208:32451 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.41.208:32451 over TCP protocol
occurred

... skipping 227 lines ...
• Failure [204.302 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 10 08:36:35.594: Unexpected error:
      <*errors.errorString | 0xc0046f61f0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.41.208:32451 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.41.208:32451 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1190
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":36,"skipped":304,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Jul 10 08:36:41.943: INFO: Running AfterSuite actions on all nodes
Jul 10 08:36:41.943: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:36:41.943: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:36:41.943: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:36:41.943: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:36:41.943: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 15 lines ...
STEP: creating replication controller externalip-test in namespace services-2137
I0710 08:34:18.875190   12983 runners.go:190] Created replication controller with name: externalip-test, namespace: services-2137, replica count: 2
I0710 08:34:22.077575   12983 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 10 08:34:22.077: INFO: Creating new exec pod
Jul 10 08:34:25.566: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:34:32.140: INFO: rc: 1
Jul 10 08:34:32.140: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:33.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:34:40.075: INFO: rc: 1
Jul 10 08:34:40.075: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:40.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:34:47.124: INFO: rc: 1
Jul 10 08:34:47.124: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:47.141: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:34:54.088: INFO: rc: 1
Jul 10 08:34:54.088: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:34:54.141: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:01.149: INFO: rc: 1
Jul 10 08:35:01.149: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:02.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:09.111: INFO: rc: 1
Jul 10 08:35:09.111: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:09.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:16.152: INFO: rc: 1
Jul 10 08:35:16.152: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:17.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:24.075: INFO: rc: 1
Jul 10 08:35:24.075: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:24.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:31.112: INFO: rc: 1
Jul 10 08:35:31.112: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:31.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:38.074: INFO: rc: 1
Jul 10 08:35:38.075: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:38.141: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:45.064: INFO: rc: 1
Jul 10 08:35:45.064: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:45.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:52.088: INFO: rc: 1
Jul 10 08:35:52.088: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:52.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:35:59.093: INFO: rc: 1
Jul 10 08:35:59.093: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:35:59.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:36:06.075: INFO: rc: 1
Jul 10 08:36:06.075: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:06.141: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:36:13.119: INFO: rc: 1
Jul 10 08:36:13.119: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:13.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:36:20.073: INFO: rc: 1
Jul 10 08:36:20.073: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:20.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:36:27.052: INFO: rc: 1
Jul 10 08:36:27.052: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:27.140: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:36:34.201: INFO: rc: 1
Jul 10 08:36:34.202: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:34.202: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul 10 08:36:41.243: INFO: rc: 1
Jul 10 08:36:41.243: INFO: Service reachability failing with error: error running /tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2137 exec execpodrqsxx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 10 08:36:41.243: FAIL: Unexpected error:
    <*errors.errorString | 0xc0019542f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol
occurred

... skipping 227 lines ...
• Failure [149.839 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1198

  Jul 10 08:36:41.243: Unexpected error:
      <*errors.errorString | 0xc0019542f0>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1222
------------------------------
{"msg":"FAILED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":46,"skipped":464,"failed":4,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
Jul 10 08:36:47.587: INFO: Running AfterSuite actions on all nodes
Jul 10 08:36:47.587: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:36:47.587: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:36:47.587: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:36:47.587: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:36:47.587: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 25 lines ...
• [SLOW TEST:245.487 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":97,"failed":4,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
Jul 10 08:37:55.570: INFO: Running AfterSuite actions on all nodes
Jul 10 08:37:55.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:37:55.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:37:55.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:37:55.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:37:55.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jul 10 08:37:59.963: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul 10 08:37:59.963: INFO: Deleting pod "simpletest.rc-7nglw" in namespace "gc-7120"
Jul 10 08:38:00.134: INFO: Deleting pod "simpletest.rc-9klqj" in namespace "gc-7120"
Jul 10 08:38:00.302: INFO: Deleting pod "simpletest.rc-9mmh7" in namespace "gc-7120"
Jul 10 08:38:00.468: INFO: Deleting pod "simpletest.rc-9t2h9" in namespace "gc-7120"
Jul 10 08:38:00.635: INFO: Deleting pod "simpletest.rc-9xd5n" in namespace "gc-7120"
Jul 10 08:38:00.800: INFO: Deleting pod "simpletest.rc-hfz29" in namespace "gc-7120"
... skipping 10 lines ...
• [SLOW TEST:344.438 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":30,"skipped":279,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
Jul 10 08:38:01.971: INFO: Running AfterSuite actions on all nodes
Jul 10 08:38:01.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:38:01.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:38:01.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:38:01.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:38:01.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 165 lines ...
Jul 10 08:27:53.032: INFO: Running '/tmp/kubectl3809781946/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3130 create -f -'
Jul 10 08:27:53.834: INFO: stderr: ""
Jul 10 08:27:53.834: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jul 10 08:27:53.834: INFO: Waiting for all frontend pods to be Running.
Jul 10 08:27:54.036: INFO: Waiting for frontend to serve content.
Jul 10 08:28:24.203: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.2.29:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:28:59.370: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.5.40:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:29:34.537: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.2.29:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:30:09.707: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:30:44.874: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:31:20.042: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.5.40:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:31:55.210: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.5.40:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:32:30.372: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.5.40:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:33:05.557: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:33:40.724: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.2.29:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:34:15.887: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:34:51.050: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.5.40:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:35:26.212: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:36:01.374: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:36:36.536: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.5.40:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:37:11.698: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.1.222:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:37:46.864: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.2.29:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:38:22.030: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status Failure "error trying to reach service: dial tcp 100.96.2.29:80: i/o timeout" (reason: ServiceUnavailable)
Jul 10 08:38:27.031: FAIL: Frontend service did not start serving content in 600 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:375 +0x159
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00046dc80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 61 lines ...
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:27:54 +0000 UTC - event for agnhost-replica-6bcf79b489-2chgb: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} Started: Started container replica
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:27:54 +0000 UTC - event for agnhost-replica-6bcf79b489-2chgb: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} Created: Created container replica
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:27:54 +0000 UTC - event for agnhost-replica-6bcf79b489-2chgb: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:27:54 +0000 UTC - event for agnhost-replica-6bcf79b489-lg65v: {kubelet ip-172-20-41-208.ap-northeast-2.compute.internal} Created: Created container replica
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:27:54 +0000 UTC - event for agnhost-replica-6bcf79b489-lg65v: {kubelet ip-172-20-41-208.ap-northeast-2.compute.internal} Started: Started container replica
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:27:54 +0000 UTC - event for agnhost-replica-6bcf79b489-lg65v: {kubelet ip-172-20-41-208.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:29:03 +0000 UTC - event for agnhost-replica-6bcf79b489-lg65v: {kubelet ip-172-20-41-208.ap-northeast-2.compute.internal} BackOff: Back-off restarting failed container
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:29:38 +0000 UTC - event for agnhost-replica-6bcf79b489-2chgb: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} BackOff: Back-off restarting failed container
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:38:30 +0000 UTC - event for frontend-685fc574d5-24vcx: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Killing: Stopping container guestbook-frontend
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:38:30 +0000 UTC - event for frontend-685fc574d5-8gdbl: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} Killing: Stopping container guestbook-frontend
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:38:30 +0000 UTC - event for frontend-685fc574d5-slrl6: {kubelet ip-172-20-41-208.ap-northeast-2.compute.internal} Killing: Stopping container guestbook-frontend
Jul 10 08:38:32.555: INFO: At 2021-07-10 08:38:31 +0000 UTC - event for agnhost-primary-5db8ddd565-tdpd7: {kubelet ip-172-20-35-182.ap-northeast-2.compute.internal} Killing: Stopping container primary
Jul 10 08:38:32.718: INFO: POD                               NODE                                              PHASE    GRACE  CONDITIONS
Jul 10 08:38:32.718: INFO: agnhost-primary-5db8ddd565-tdpd7  ip-172-20-35-182.ap-northeast-2.compute.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:27:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:38:31 +0000 UTC ContainersNotReady containers with unready status: [primary]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:38:31 +0000 UTC ContainersNotReady containers with unready status: [primary]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-10 08:27:52 +0000 UTC  }]
... skipping 171 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul 10 08:38:27.031: Frontend service did not start serving content in 600 seconds.

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:375
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":22,"skipped":158,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
Jul 10 08:38:38.760: INFO: Running AfterSuite actions on all nodes
Jul 10 08:38:38.760: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:38:38.760: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:38:38.760: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:38:38.760: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:38:38.761: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 24 lines ...
Jul 10 08:35:39.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6560 from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: the server is currently unable to handle the request (get pods dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146)
Jul 10 08:36:09.852: INFO: Unable to read wheezy_udp@dns-test-service.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: the server is currently unable to handle the request (get pods dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146)
Jul 10 08:36:40.007: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: the server is currently unable to handle the request (get pods dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146)
Jul 10 08:37:10.167: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: the server is currently unable to handle the request (get pods dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146)
Jul 10 08:37:40.324: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: the server is currently unable to handle the request (get pods dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146)
Jul 10 08:38:10.481: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: the server is currently unable to handle the request (get pods dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146)
Jul 10 08:38:39.076: FAIL: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-6560/pods/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146/proxy/results/wheezy_tcp@_http._tcp.test-service-2.dns-6560.svc": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x7904988, 0xc00005e058, 0x7fe871b4aa68, 0x18, 0xc0030fcff0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x7904988, 0xc00005e058, 0xc004054500, 0x2a27200, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc000326900, 0x72c1e78)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0710 08:38:39.077535   12808 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul 10 08:38:39.076: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6560.svc from pod dns-6560/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-6560/pods/dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146/proxy/results/wheezy_tcp@_http._tcp.test-service-2.dns-6560.svc\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x7904988, 0xc00005e058, 0x7fe871b4aa68, 0x18, 0xc0030fcff0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x7904988, 0xc00005e058, 0xc004054500, 0x2a27200, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x7904988, 0xc00005e058, 0xc0030fcf01, 0xc0030fcff0, 0xc004054500, 0x684b7c0, 0xc004054500)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x7904988, 0xc00005e058, 0x12a05f200, 0x8bb2c97000, 0xc004054500, 0x6d95be0, 0x2535101)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003bc0cb0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002468780, 0x1c, 0x28, 0x7058c6e, 0x7, 0xc003745400, 0x7997ea8, 0xc00239b4a0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008d58c0, 0xc003745400, 0xc002468780, 0x1c, 0x28)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.6()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:237 +0xec5\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000326900)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000326900)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000326900, 0x72c1e78)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6be52e0, 0xc002b021c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6be52e0, 0xc002b021c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002030480, 0x17b, 0x88e2b66, 0x7d, 0xd9, 0xc00020fc00, 0xa8c)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x6312a40, 0x77bbcc0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002030480, 0x17b, 0xc00530b590, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002030480, 0x17b, 0xc00530b678, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x70fc2e0, 0x24, 0xc00530b8d8, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x7904988, 0xc00005e058, 0x7fe871b4aa68, 0x18, 0xc0030fcff0)
... skipping 61 lines ...
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:33:37 +0000 UTC - event for dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:33:37 +0000 UTC - event for dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Created: Created container jessie-querier
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:33:37 +0000 UTC - event for dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Started: Started container jessie-querier
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:38:39 +0000 UTC - event for dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Killing: Stopping container webserver
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:38:39 +0000 UTC - event for dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Killing: Stopping container jessie-querier
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:38:39 +0000 UTC - event for dns-test-eb0886c8-e8c7-446a-ab09-fefd5f4c9146: {kubelet ip-172-20-37-88.ap-northeast-2.compute.internal} Killing: Stopping container querier
Jul 10 08:38:39.732: INFO: At 2021-07-10 08:38:39 +0000 UTC - event for dns-test-service: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint dns-6560/dns-test-service: Operation cannot be fulfilled on endpoints "dns-test-service": the object has been modified; please apply your changes to the latest version and try again
Jul 10 08:38:39.889: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul 10 08:38:39.889: INFO: 
Jul 10 08:38:40.047: INFO: 
Logging node info for node ip-172-20-35-182.ap-northeast-2.compute.internal
Jul 10 08:38:40.204: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-35-182.ap-northeast-2.compute.internal    1a6c9be0-f2a0-437a-b5ad-99701c252cfb 47893 0 2021-07-10 08:00:56 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:nodes-ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-35-182.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-35-182.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07e3a6916f931b901"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-10 08:00:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-10 08:00:56 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-10 08:00:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-10 08:01:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.5.0/24\"":{}}}} } {kubelet Update v1 2021-07-10 08:34:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.5.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-07e3a6916f931b901,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-10 08:34:27 +0000 UTC,LastTransitionTime:2021-07-10 08:00:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-10 08:34:27 +0000 UTC,LastTransitionTime:2021-07-10 08:00:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-10 08:34:27 +0000 UTC,LastTransitionTime:2021-07-10 08:00:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-10 08:34:27 +0000 UTC,LastTransitionTime:2021-07-10 08:01:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.35.182,},NodeAddress{Type:ExternalIP,Address:3.36.67.24,},NodeAddress{Type:InternalDNS,Address:ip-172-20-35-182.ap-northeast-2.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-35-182.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-36-67-24.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec24797d4aec87f81c975a28ffae5bb1,SystemUUID:ec24797d-4aec-87f8-1c97-5a28ffae5bb1,BootID:56033625-3348-442f-9130-e62e8679b3df,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.1,KubeProxyVersion:v1.22.0-beta.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.1],SizeBytes:105483977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:8df46d7414eda82c2a8c9c50926545293811ae59f977825845dda7d558b4125b docker.io/library/nginx:latest],SizeBytes:53757110,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul 10 08:38:40.204: INFO: 
... skipping 157 lines ...
[It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:449
STEP: create the rc
STEP: delete the rc
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jul 10 08:39:03.651: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul 10 08:39:03.651: INFO: Deleting pod "simpletest.rc-d958s" in namespace "gc-8506"
Jul 10 08:39:03.818: INFO: Deleting pod "simpletest.rc-qcc2h" in namespace "gc-8506"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 10 08:39:03.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8506" for this suite.
... skipping 2 lines ...
• [SLOW TEST:337.770 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:449
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":37,"skipped":303,"failed":3,"failures":["[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
Jul 10 08:39:04.321: INFO: Running AfterSuite actions on all nodes
Jul 10 08:39:04.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:39:04.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:39:04.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:39:04.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:39:04.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 28 lines ...
Jul 10 08:34:35.051: INFO: PersistentVolumeClaim pvc-pgsxt found but phase is Pending instead of Bound.
Jul 10 08:34:37.211: INFO: PersistentVolumeClaim pvc-pgsxt found and phase=Bound (15.290815583s)
Jul 10 08:34:37.211: INFO: Waiting up to 3m0s for PersistentVolume aws-bqj7z to have phase Bound
Jul 10 08:34:37.371: INFO: PersistentVolume aws-bqj7z found and phase=Bound (160.096286ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-q6d2
STEP: Creating a pod to test exec-volume-test
Jul 10 08:34:37.854: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-q6d2" in namespace "volume-9278" to be "Succeeded or Failed"
Jul 10 08:34:38.014: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 160.473617ms
Jul 10 08:34:40.175: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321091435s
Jul 10 08:34:42.337: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483143673s
Jul 10 08:34:44.499: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.644814791s
Jul 10 08:34:46.661: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.806658879s
Jul 10 08:34:48.822: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.967852746s
... skipping 127 lines ...
Jul 10 08:39:25.482: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.627925624s
Jul 10 08:39:27.644: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.789710049s
Jul 10 08:39:29.805: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.950899045s
Jul 10 08:39:31.966: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.112179142s
Jul 10 08:39:34.127: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.272999131s
Jul 10 08:39:36.289: INFO: Pod "exec-volume-test-preprovisionedpv-q6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.434980001s
Jul 10 08:39:38.612: INFO: Failed to get logs from node "ip-172-20-49-206.ap-northeast-2.compute.internal" pod "exec-volume-test-preprovisionedpv-q6d2" container "exec-container-preprovisionedpv-q6d2": the server rejected our request for an unknown reason (get pods exec-volume-test-preprovisionedpv-q6d2)
STEP: delete the pod
Jul 10 08:39:38.773: INFO: Waiting for pod exec-volume-test-preprovisionedpv-q6d2 to disappear
Jul 10 08:39:38.934: INFO: Pod exec-volume-test-preprovisionedpv-q6d2 still exists
Jul 10 08:39:40.934: INFO: Waiting for pod exec-volume-test-preprovisionedpv-q6d2 to disappear
Jul 10 08:39:41.095: INFO: Pod exec-volume-test-preprovisionedpv-q6d2 still exists
Jul 10 08:39:42.935: INFO: Waiting for pod exec-volume-test-preprovisionedpv-q6d2 to disappear
Jul 10 08:39:43.096: INFO: Pod exec-volume-test-preprovisionedpv-q6d2 no longer exists
Jul 10 08:39:43.096: FAIL: Unexpected error:
    <*errors.errorString | 0xc001f72280>: {
        s: "expected pod \"exec-volume-test-preprovisionedpv-q6d2\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-preprovisionedpv-q6d2\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-preprovisionedpv-q6d2" success: Gave up after waiting 5m0s for pod "exec-volume-test-preprovisionedpv-q6d2" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc001c269a0, 0x707a269, 0x10, 0xc003428400, 0x0, 0xc0031e70d8, 0x1, 0x1, 0x72c4fd0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 17 lines ...
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-9278".
STEP: Found 6 events.
Jul 10 08:39:44.370: INFO: At 2021-07-10 08:34:21 +0000 UTC - event for pvc-pgsxt: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-9278" not found
Jul 10 08:39:44.370: INFO: At 2021-07-10 08:34:37 +0000 UTC - event for exec-volume-test-preprovisionedpv-q6d2: {default-scheduler } Scheduled: Successfully assigned volume-9278/exec-volume-test-preprovisionedpv-q6d2 to ip-172-20-49-206.ap-northeast-2.compute.internal
Jul 10 08:39:44.370: INFO: At 2021-07-10 08:34:53 +0000 UTC - event for exec-volume-test-preprovisionedpv-q6d2: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-bqj7z" : rpc error: code = NotFound desc = Instance "i-088af31c3b0e30700" not found
Jul 10 08:39:44.370: INFO: At 2021-07-10 08:35:11 +0000 UTC - event for exec-volume-test-preprovisionedpv-q6d2: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-bqj7z" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 10 08:39:44.370: INFO: At 2021-07-10 08:36:40 +0000 UTC - event for exec-volume-test-preprovisionedpv-q6d2: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[kube-api-access-t5sk7 vol1]: timed out waiting for the condition
Jul 10 08:39:44.370: INFO: At 2021-07-10 08:38:59 +0000 UTC - event for exec-volume-test-preprovisionedpv-q6d2: {kubelet ip-172-20-49-206.ap-northeast-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[vol1 kube-api-access-t5sk7]: timed out waiting for the condition
Jul 10 08:39:44.531: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul 10 08:39:44.531: INFO: 
Jul 10 08:39:44.692: INFO: 
Logging node info for node ip-172-20-35-182.ap-northeast-2.compute.internal
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Jul 10 08:39:43.097: Unexpected error:
          <*errors.errorString | 0xc001f72280>: {
              s: "expected pod \"exec-volume-test-preprovisionedpv-q6d2\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-preprovisionedpv-q6d2\" to be \"Succeeded or Failed\"",
          }
          expected pod "exec-volume-test-preprovisionedpv-q6d2" success: Gave up after waiting 5m0s for pod "exec-volume-test-preprovisionedpv-q6d2" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":35,"skipped":326,"failed":3,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume"]}
Jul 10 08:39:50.311: INFO: Running AfterSuite actions on all nodes
Jul 10 08:39:50.311: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul 10 08:39:50.311: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul 10 08:39:50.311: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul 10 08:39:50.311: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul 10 08:39:50.311: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 1612 lines ...
Failure "error trying to reach service: dial tcp 100.96.1.231:162: ..." (503; 30.322416201s)
Jul 10 08:39:40.119: INFO: (19) /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/: Status Failure "error trying to reach service: dial tcp 100.96.1.231:80: i..." (503; 30.322393301s)
Jul 10 08:39:40.282: INFO: Pod proxy-service-484xx-l4dgk has the following error logs: 
Jul 10 08:39:40.283: FAIL: 0 (503; 30.165110058s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.165815908s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.165890869s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.165748239s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.165738318s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.166169828s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.166462279s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.166331708s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.166333489s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.166270609s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.318828754s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.318882045s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.318852135s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.319990374s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.320129595s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.320015004s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.165620434s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.165697104s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.165507974s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.165622634s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.165481384s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.165642794s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.166750544s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.166792844s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.166799714s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.166828654s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.321783319s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.321856929s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.321829779s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.322898639s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.326235789s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.326262119s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168268626s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168622396s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168396846s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168535566s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168417086s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168553056s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168622236s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168596276s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168703596s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.168617476s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.322551612s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.322668572s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.322749052s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.322698712s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.322833512s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.322626152s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.163553336s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167009556s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167139056s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167295266s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167025166s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167330616s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167320756s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167330395s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167204636s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.167237985s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.322363431s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.322527571s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.322507341s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.322608322s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.322598262s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.323646532s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.165556265s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.165673025s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.165709865s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.165702436s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.165878616s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.165682376s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.167166155s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.167334495s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.167184445s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.167217955s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.322880801s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.323139941s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.323071551s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.323125091s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.322916931s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.323087392s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.164055739s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.164144959s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171465188s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171449128s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171633428s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171580348s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171633977s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171540659s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171631608s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.171633059s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.324486864s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.324693484s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.324691184s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.324755745s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.326227915s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.326281345s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.16996224s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.17022426s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.17022551s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.17026218s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.17028991s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.187858049s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.187921389s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.19050849s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.190542589s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.205593909s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.334734015s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.334718426s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.334760535s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.334736595s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.334719886s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.334669906s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.19538677s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.195720681s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.195905161s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.195398961s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.19575393s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.20660159s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.20652186s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.20649226s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.20657489s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.20783006s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.336671938s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.336724508s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.336774058s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.336822248s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.336691188s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.339347138s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
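Each retried request above is an apiserver proxy GET against the test pod or service in namespace proxy-7143, and every attempt is failing with a ServiceUnavailable Status (HTTP 503). A minimal client-go sketch of the same request shape is shown below; it is an illustration only, assuming a reachable cluster and a kubeconfig at the default location (the namespace and pod name are copied from the log lines above, and the exact escaping/trailing slash of the generated URL may differ slightly from the logged path).

// proxyprobe issues the same kind of apiserver proxy GET that the e2e test
// retries above, e.g. /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig exists at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build GET .../namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy,
	// i.e. ask the apiserver to proxy the request through to the pod's port 160.
	body, err := clientset.CoreV1().RESTClient().
		Get().
		Namespace("proxy-7143").
		Resource("pods").
		Name("proxy-service-484xx-l4dgk:160").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		// When the apiserver cannot reach the backend, the call typically fails with
		// an error carrying the same ServiceUnavailable (503) Status seen in the log.
		fmt.Printf("proxy request failed: %v\n", err)
		return
	}
	fmt.Printf("proxy response: %s\n", body)
}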
8 (503; 30.162990863s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166900623s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166753873s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.167264053s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.167071223s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166901833s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166851603s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166965533s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166974073s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.166958063s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.331309349s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.337884099s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.338533439s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.342910248s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.342863648s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.342821318s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.165160618s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.165372249s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166542228s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166812698s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166935248s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166918778s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166881688s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166885508s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166876978s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.166934048s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.328206984s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.329771854s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.335298914s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.335325574s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.338713354s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.338890324s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.165586078s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.165650928s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.165807968s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.165648348s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.166303168s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.166459658s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.166442698s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.166765718s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.166580328s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.166536608s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.321806244s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.321823634s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.324080114s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.324273824s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.324275544s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.324491314s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164495247s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164697707s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164649507s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164562947s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164697657s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164646647s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.164654287s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.166351947s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.166248627s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.166471337s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.322148943s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.322209693s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.323029262s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.323182682s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.323129823s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
11 (503; 30.323268012s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.166827451s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167017362s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.166961431s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167406902s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167118933s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167215213s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167118242s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167226171s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167489093s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.167293533s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.322260408s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.322325948s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.322789699s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.322522349s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.322575929s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
12 (503; 30.322953178s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.164365252s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.164259632s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.164248422s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.164277342s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.164183842s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.164344042s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.165784212s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.165551232s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.165783112s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.165811402s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.321884299s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.321859079s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.321970229s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.323284989s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.323325418s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
13 (503; 30.323464188s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.16351395s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.163719379s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.163898029s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.163962629s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.1640079s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.164229789s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.166695709s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.16693705s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.16682476s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.166833289s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
The retries below against the proxy-7143 test endpoints all failed with the identical ServiceUnavailable status; the repeated response body is shown once here rather than on every line:
status error (identical for every request below): {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
14 (503; 30.322396206s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/
14 (503; 30.322489736s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/
14 (503; 30.322434375s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/
14 (503; 30.322610365s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/
14 (503; 30.322425476s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/
14 (503; 30.322351616s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/
15 (503; 30.16511627s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/
15 (503; 30.16529729s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/
15 (503; 30.16541955s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/
15 (503; 30.16532615s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/
15 (503; 30.16553342s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/
15 (503; 30.1652693s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/
15 (503; 30.16517225s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/
15 (503; 30.16547359s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/
15 (503; 30.16556497s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/
15 (503; 30.16558357s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/
15 (503; 30.322619576s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/
15 (503; 30.322743986s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/
15 (503; 30.322616326s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/
15 (503; 30.322645006s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/
15 (503; 30.322641326s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/
15 (503; 30.322757956s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/
16 (503; 30.166044473s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/
16 (503; 30.166034043s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/
16 (503; 30.166358673s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/
16 (503; 30.166060403s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/
16 (503; 30.166181203s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/
16 (503; 30.166094583s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/
16 (503; 30.166630243s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/
16 (503; 30.166612393s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/
16 (503; 30.166537203s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/
16 (503; 30.166553503s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/
16 (503; 30.32183918s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/
16 (503; 30.32181913s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/
16 (503; 30.32221469s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/
16 (503; 30.32230194s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/
16 (503; 30.32219195s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/
16 (503; 30.32254507s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/
17 (503; 30.164670761s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/
17 (503; 30.165432891s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/
17 (503; 30.166136671s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/
17 (503; 30.16620271s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/
17 (503; 30.16601754s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/
17 (503; 30.166080211s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/
17 (503; 30.16606618s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/
17 (503; 30.166185191s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/
17 (503; 30.166213491s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/
17 (503; 30.166220681s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/
17 (503; 30.32168327s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/
17 (503; 30.32405148s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/
17 (503; 30.324194789s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/
17 (503; 30.32433324s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/
17 (503; 30.32420998s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/
17 (503; 30.32427105s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/
18 (503; 30.165491182s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/
18 (503; 30.165340932s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk/proxy/
18 (503; 30.165633352s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname2/proxy/
18 (503; 30.165568402s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/
18 (503; 30.165572441s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/
18 (503; 30.165645871s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/
18 (503; 30.165619121s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:462/proxy/
18 (503; 30.165854432s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname2/proxy/
18 (503; 30.166130561s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/
18 (503; 30.166282392s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname1/proxy/
18 (503; 30.322097432s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/
18 (503; 30.322058342s): path /api/v1/namespaces/proxy-7143/services/proxy-service-484xx:portname1/proxy/
18 (503; 30.322167692s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/
18 (503; 30.322142791s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/
18 (503; 30.322263902s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:443/proxy/
18 (503; 30.323249831s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/
19 (503; 30.165003779s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:160/proxy/
19 (503; 30.164906269s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:162/proxy/
19 (503; 30.164944819s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:160/proxy/
19 (503; 30.164999719s): path /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy/
19 (503; 30.165222209s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:162/proxy/
19 (503; 30.165404079s): path /api/v1/namespaces/proxy-7143/pods/https:proxy-service-484xx-l4dgk:460/proxy/
19 (503; 30.166471319s): path /api/v1/namespaces/proxy-7143/pods/http:proxy-service-484xx-l4dgk:1080/proxy/
19 (503; 30.168899248s): path /api/v1/namespaces/proxy-7143/services/https:proxy-service-484xx:tlsportname2/proxy/
19 (503; 30.169131818s): path /api/v1/namespaces/proxy-7143/services/http:proxy-service-484xx:portname1/proxy/
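Note (not part of the test output): the paths above are the API server's pods/proxy and services/proxy subresources, which the e2e proxy test polls until the backing endpoints answer; here every attempt timed out with 503 ServiceUnavailable. The following is a minimal, illustrative client-go sketch of how one of these proxy requests can be reproduced by hand; the kubeconfig location is an assumption, and the namespace, pod name and port are simply reused from the log lines above.

// Illustrative sketch only: issue the same kind of pod proxy request that the
// e2e test above is retrying. Assumes a kubeconfig at the default location.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load cluster credentials from the default kubeconfig (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/proxy-7143/pods/proxy-service-484xx-l4dgk:1080/proxy
	// (namespace, pod name and port taken from the failing requests in the log).
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-7143").
		Resource("pods").
		Name("proxy-service-484xx-l4dgk:1080").
		SubResource("proxy").
		Do(context.TODO()).
		Raw()
	if err != nil {
		// A 503 ServiceUnavailable here corresponds to the errors recorded above.
		fmt.Println("proxy request failed:", err)
		return
	}
	fmt.Println(string(body))
}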