PR	olemarkus: Enable IRSA for CCM
Result	FAILURE
Tests	0 failed / 0 succeeded
Started	2021-07-05 08:51
Elapsed	1h2m
Revision	5e0494a267e626b2ef68757737f4a3a32052fc0d
Refs	11818

No Test Failures!


Error lines from build-log.txt

... skipping 484 lines ...
I0705 08:56:40.925659    4348 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0705 08:56:40.942732   11858 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 08:56:40.942828   11858 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 08:56:40.942833   11858 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
W0705 08:56:41.418997    4348 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0705 08:56:41.419077    4348 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --yes
I0705 08:56:41.434458   11868 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 08:56:41.434636   11868 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 08:56:41.434924   11868 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
I0705 08:56:41.920552    4348 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/05 08:56:41 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0705 08:56:41.927839    4348 http.go:37] curl https://ip.jsb.workers.dev
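The two curl lines above are an external-IP lookup with fallback: the GCE metadata endpoint returns 404 (this runner evidently has no external-ip access-config entry), so the harness falls back to an external what's-my-IP service. A minimal shell sketch of the same logic (the Metadata-Flavor header is the standard GCE metadata requirement; it is not visible in this log):

  # Try the GCE metadata server first; on failure, fall back to the
  # external service the harness uses. -f makes curl fail on HTTP 404.
  ip=$(curl -sf -H 'Metadata-Flavor: Google' \
    http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip) \
    || ip=$(curl -sf https://ip.jsb.workers.dev)
  echo "${ip}"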
I0705 08:56:42.016762    4348 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.0-beta.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=kubenet --container-runtime=containerd --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.serviceAccountIssuerDiscovery.discoveryStore=s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery --override=cluster.spec.serviceAccountIssuerDiscovery.enableAWSOIDCProvider=true --admin-access 35.202.198.26/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-2a --master-size c5.large
I0705 08:56:42.033276   11878 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 08:56:42.034060   11878 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 08:56:42.034065   11878 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
I0705 08:56:42.076883   11878 create_cluster.go:739] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 33 lines ...
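Of the flags in the create-cluster command above, the two serviceAccountIssuerDiscovery overrides are what enable IRSA for this run: they publish the cluster's service-account issuer discovery documents to S3 and create the matching AWS OIDC provider. A hedged way to confirm they landed in the cluster spec (kops get cluster supports YAML output; the grep window is arbitrary):

  # sketch: show the IRSA-related fields of the created cluster spec
  kops get cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io -o yaml \
    | grep -A 3 serviceAccountIssuerDiscovery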
I0705 08:57:05.400006    4348 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0705 08:57:05.417823   11899 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 08:57:05.417912   11899 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 08:57:05.417917   11899 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
Validating cluster e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

W0705 08:57:06.716191   11899 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
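A quick way to check by hand the condition the retry loop below is waiting on (a sketch; assumes dig is available on the host, which this log does not show):

  # Does the API record still resolve to the kops placeholder
  # 203.0.113.123, or has dns-controller updated it yet?
  dig +short api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io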
... skipping 305 lines (repeated "cluster not yet healthy" validation retries with identical output) ...
W0705 09:00:37.465724   11899 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 8 lines ...
Machine	i-07289ac18778d31dd				machine "i-07289ac18778d31dd" has not yet joined cluster
Machine	i-0ffd9423c66cf1001				machine "i-0ffd9423c66cf1001" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-7vf57	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-7vf57" is pending
Pod	kube-system/coredns-f45c4bf76-4zr55		system-cluster-critical pod "coredns-f45c4bf76-4zr55" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-7vfrc	system-cluster-critical pod "ebs-csi-controller-566c97f85c-7vfrc" is pending

Validation Failed
W0705 09:00:48.788724   11899 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 8 lines ...
Machine	i-07289ac18778d31dd				machine "i-07289ac18778d31dd" has not yet joined cluster
Machine	i-0ffd9423c66cf1001				machine "i-0ffd9423c66cf1001" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-7vf57	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-7vf57" is pending
Pod	kube-system/coredns-f45c4bf76-4zr55		system-cluster-critical pod "coredns-f45c4bf76-4zr55" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-7vfrc	system-cluster-critical pod "ebs-csi-controller-566c97f85c-7vfrc" is pending

Validation Failed
W0705 09:00:59.652426   11899 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 12 lines ...
Node	ip-172-20-57-184.us-east-2.compute.internal	node "ip-172-20-57-184.us-east-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-autoscaler-6f594f4c58-7vf57	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-7vf57" is pending
Pod	kube-system/coredns-f45c4bf76-4zr55		system-cluster-critical pod "coredns-f45c4bf76-4zr55" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-7vfrc	system-cluster-critical pod "ebs-csi-controller-566c97f85c-7vfrc" is pending
Pod	kube-system/ebs-csi-node-hn2km			system-node-critical pod "ebs-csi-node-hn2km" is pending

Validation Failed
W0705 09:01:10.537188   11899 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 9 lines ...
KIND	NAME									MESSAGE
Node	ip-172-20-55-216.us-east-2.compute.internal				node "ip-172-20-55-216.us-east-2.compute.internal" of role "node" is not ready
Pod	kube-system/ebs-csi-controller-566c97f85c-7vfrc				system-cluster-critical pod "ebs-csi-controller-566c97f85c-7vfrc" is pending
Pod	kube-system/ebs-csi-node-g4pqp						system-node-critical pod "ebs-csi-node-g4pqp" is pending
Pod	kube-system/kube-proxy-ip-172-20-38-136.us-east-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-38-136.us-east-2.compute.internal" is pending

Validation Failed
W0705 09:01:21.443234   11899 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 922 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:03:49.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:03:50.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5326" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:03:50.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3286" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:03:50.798: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename kubectl
W0705 09:03:50.270789   12630 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  5 09:03:50.270: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1836
STEP: calling kubectl wait --for=delete
Jul  5 09:03:50.350: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5011 wait --for=delete pod/doesnotexist'
Jul  5 09:03:50.847: INFO: stderr: ""
Jul  5 09:03:50.848: INFO: stdout: ""
Jul  5 09:03:50.848: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5011 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:03:51.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5011" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:03:51.172: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:03:52.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5653" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:03:52.597: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:03:53.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-7550" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:03:53.296: INFO: Only supported for providers [openstack] (not aws)
... skipping 96 lines ...
• [SLOW TEST:8.890 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:03:58.442: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 61 lines ...
• [SLOW TEST:10.051 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:03:52.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737" in namespace "projected-5798" to be "Succeeded or Failed"
Jul  5 09:03:52.095: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737": Phase="Pending", Reason="", readiness=false. Elapsed: 29.006152ms
Jul  5 09:03:54.126: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059332589s
Jul  5 09:03:56.163: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096436968s
Jul  5 09:03:58.194: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127220861s
Jul  5 09:04:00.224: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157445152s
Jul  5 09:04:02.257: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190366909s
STEP: Saw pod success
Jul  5 09:04:02.257: INFO: Pod "downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737" satisfied condition "Succeeded or Failed"
Jul  5 09:04:02.288: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737 container client-container: <nil>
STEP: delete the pod
Jul  5 09:04:02.862: INFO: Waiting for pod downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737 to disappear
Jul  5 09:04:02.892: INFO: Pod downwardapi-volume-66765271-6a55-4b66-8f46-367e71f37737 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.191 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:16.627 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:06.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
• [SLOW TEST:17.515 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:17.892 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:03:52.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-4823/configmap-test-a380614b-c1df-4106-8ad7-dd7cd490efe6
STEP: Creating a pod to test consume configMaps
Jul  5 09:03:52.961: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5" in namespace "configmap-4823" to be "Succeeded or Failed"
Jul  5 09:03:52.990: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.798175ms
Jul  5 09:03:55.021: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060593147s
Jul  5 09:03:57.052: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091786451s
Jul  5 09:03:59.084: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123203014s
Jul  5 09:04:01.114: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153766065s
Jul  5 09:04:03.144: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183615903s
Jul  5 09:04:05.175: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.214277776s
Jul  5 09:04:07.206: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.244986896s
Jul  5 09:04:09.237: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.276194225s
STEP: Saw pod success
Jul  5 09:04:09.237: INFO: Pod "pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5" satisfied condition "Succeeded or Failed"
Jul  5 09:04:09.288: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5 container env-test: <nil>
STEP: delete the pod
Jul  5 09:04:09.630: INFO: Waiting for pod pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5 to disappear
Jul  5 09:04:09.661: INFO: Pod pod-configmaps-ab27fa60-2ee6-4a0d-8600-55a07a54e5c5 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.103 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:09.749: INFO: Only supported for providers [openstack] (not aws)
... skipping 43 lines ...
STEP: Creating a kubernetes client
Jul  5 09:03:49.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W0705 09:03:49.667883   12522 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  5 09:03:49.668: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:09.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2109" for this suite.


• [SLOW TEST:20.397 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:21.111 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:09.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  5 09:04:10.000: INFO: Waiting up to 5m0s for pod "pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79" in namespace "emptydir-5671" to be "Succeeded or Failed"
Jul  5 09:04:10.030: INFO: Pod "pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79": Phase="Pending", Reason="", readiness=false. Elapsed: 29.510037ms
Jul  5 09:04:12.060: INFO: Pod "pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060284983s
Jul  5 09:04:14.091: INFO: Pod "pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091242419s
STEP: Saw pod success
Jul  5 09:04:14.091: INFO: Pod "pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79" satisfied condition "Succeeded or Failed"
Jul  5 09:04:14.121: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79 container test-container: <nil>
STEP: delete the pod
Jul  5 09:04:14.186: INFO: Waiting for pod pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79 to disappear
Jul  5 09:04:14.215: INFO: Pod pod-75eb8445-eef6-4c67-bbbc-59fa6a372b79 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:14.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5671" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:25.834 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Jul  5 09:04:08.301: INFO: PersistentVolumeClaim pvc-pr84f found but phase is Pending instead of Bound.
Jul  5 09:04:10.330: INFO: PersistentVolumeClaim pvc-pr84f found and phase=Bound (6.131108511s)
Jul  5 09:04:10.331: INFO: Waiting up to 3m0s for PersistentVolume local-fmc9p to have phase Bound
Jul  5 09:04:10.359: INFO: PersistentVolume local-fmc9p found and phase=Bound (28.736346ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4tdn
STEP: Creating a pod to test subpath
Jul  5 09:04:10.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4tdn" in namespace "provisioning-8403" to be "Succeeded or Failed"
Jul  5 09:04:10.478: INFO: Pod "pod-subpath-test-preprovisionedpv-4tdn": Phase="Pending", Reason="", readiness=false. Elapsed: 28.705968ms
Jul  5 09:04:12.507: INFO: Pod "pod-subpath-test-preprovisionedpv-4tdn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058231653s
Jul  5 09:04:14.538: INFO: Pod "pod-subpath-test-preprovisionedpv-4tdn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088446993s
Jul  5 09:04:16.567: INFO: Pod "pod-subpath-test-preprovisionedpv-4tdn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118092364s
Jul  5 09:04:18.597: INFO: Pod "pod-subpath-test-preprovisionedpv-4tdn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147842861s
STEP: Saw pod success
Jul  5 09:04:18.597: INFO: Pod "pod-subpath-test-preprovisionedpv-4tdn" satisfied condition "Succeeded or Failed"
Jul  5 09:04:18.626: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-4tdn container test-container-volume-preprovisionedpv-4tdn: <nil>
STEP: delete the pod
Jul  5 09:04:18.689: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4tdn to disappear
Jul  5 09:04:18.718: INFO: Pod pod-subpath-test-preprovisionedpv-4tdn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4tdn
Jul  5 09:04:18.718: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4tdn" in namespace "provisioning-8403"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:19.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7842" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:19.728: INFO: Only supported for providers [gce gke] (not aws)
... skipping 87 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-a8407c30-78a5-4c76-a599-e954069c70fd
STEP: Creating a pod to test consume secrets
Jul  5 09:04:12.129: INFO: Waiting up to 5m0s for pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd" in namespace "secrets-1032" to be "Succeeded or Failed"
Jul  5 09:04:12.158: INFO: Pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.938509ms
Jul  5 09:04:14.188: INFO: Pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058883363s
Jul  5 09:04:16.218: INFO: Pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088598996s
Jul  5 09:04:18.248: INFO: Pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119239413s
Jul  5 09:04:20.278: INFO: Pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149358728s
STEP: Saw pod success
Jul  5 09:04:20.278: INFO: Pod "pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd" satisfied condition "Succeeded or Failed"
Jul  5 09:04:20.309: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd container secret-env-test: <nil>
STEP: delete the pod
Jul  5 09:04:20.375: INFO: Waiting for pod pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd to disappear
Jul  5 09:04:20.404: INFO: Pod pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.551 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
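The secrets test above shows the framework's standard wait: poll the pod roughly every 2s (matching the spacing of the Elapsed values) until its phase meets the "Succeeded or Failed" condition. A minimal client-go sketch of that loop, assuming the kubeconfig at the path logged by ">>> kubeConfig:"; the helper name waitForPodCompletion is hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion polls the pod until it is Succeeded or Failed.
func waitForPodCompletion(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // "Saw pod success"
		case corev1.PodFailed:
			return true, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // Pending/Running: keep polling
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodCompletion(cs, "secrets-1032",
		"pod-secrets-9653631f-2e03-4e84-aad3-9cd7d15708bd"); err != nil {
		panic(err)
	}
}
------------------------------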
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 141 lines ...
• [SLOW TEST:7.002 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:20.942: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 210 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-8b896d67-8bc9-4cba-a2f6-0d4c72d2bcab
STEP: Creating a pod to test consume configMaps
Jul  5 09:04:14.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272" in namespace "configmap-3441" to be "Succeeded or Failed"
Jul  5 09:04:14.541: INFO: Pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272": Phase="Pending", Reason="", readiness=false. Elapsed: 29.419032ms
Jul  5 09:04:16.571: INFO: Pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059541231s
Jul  5 09:04:18.602: INFO: Pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090502742s
Jul  5 09:04:20.632: INFO: Pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121113977s
Jul  5 09:04:22.667: INFO: Pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.155774502s
STEP: Saw pod success
Jul  5 09:04:22.667: INFO: Pod "pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272" satisfied condition "Succeeded or Failed"
Jul  5 09:04:22.697: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272 container configmap-volume-test: <nil>
STEP: delete the pod
Jul  5 09:04:22.761: INFO: Waiting for pod pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272 to disappear
Jul  5 09:04:22.791: INFO: Pod pod-configmaps-1f5db2c3-5a26-47c9-8653-beb601774272 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.554 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:22.917: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 27 lines ...
STEP: Creating a kubernetes client
Jul  5 09:04:19.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jul  5 09:04:19.933: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:24.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1007" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:24.133: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 74 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:07.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:25.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5486" for this suite.


• [SLOW TEST:18.276 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
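The Job test above blocks at "STEP: Ensuring job reaches completions". Under the same clientset assumptions as the pod-wait sketch, the check reduces to polling job.Status.Succeeded; the helper name is hypothetical:

package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForJobCompletions blocks until the Job has at least `completions`
// successfully finished pods, even if individual tasks fail and restart.
func waitForJobCompletions(cs kubernetes.Interface, ns, name string, completions int32) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		job, err := cs.BatchV1().Jobs(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return job.Status.Succeeded >= completions, nil
	})
}
------------------------------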
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 78 lines ...
Jul  5 09:04:24.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  5 09:04:24.391: INFO: Waiting up to 5m0s for pod "pod-7dfd122d-b836-410e-830e-0758ee6a9550" in namespace "emptydir-9168" to be "Succeeded or Failed"
Jul  5 09:04:24.422: INFO: Pod "pod-7dfd122d-b836-410e-830e-0758ee6a9550": Phase="Pending", Reason="", readiness=false. Elapsed: 30.507488ms
Jul  5 09:04:26.452: INFO: Pod "pod-7dfd122d-b836-410e-830e-0758ee6a9550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060507741s
STEP: Saw pod success
Jul  5 09:04:26.452: INFO: Pod "pod-7dfd122d-b836-410e-830e-0758ee6a9550" satisfied condition "Succeeded or Failed"
Jul  5 09:04:26.480: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod pod-7dfd122d-b836-410e-830e-0758ee6a9550 container test-container: <nil>
STEP: delete the pod
Jul  5 09:04:26.544: INFO: Waiting for pod pod-7dfd122d-b836-410e-830e-0758ee6a9550 to disappear
Jul  5 09:04:26.576: INFO: Pod pod-7dfd122d-b836-410e-830e-0758ee6a9550 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:26.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9168" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:26.681: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
Jul  5 09:04:20.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Jul  5 09:04:20.667: INFO: Waiting up to 5m0s for pod "test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b" in namespace "svcaccounts-1670" to be "Succeeded or Failed"
Jul  5 09:04:20.704: INFO: Pod "test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.706623ms
Jul  5 09:04:22.734: INFO: Pod "test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066740141s
Jul  5 09:04:24.765: INFO: Pod "test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096984842s
Jul  5 09:04:26.797: INFO: Pod "test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129605955s
STEP: Saw pod success
Jul  5 09:04:26.797: INFO: Pod "test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b" satisfied condition "Succeeded or Failed"
Jul  5 09:04:26.828: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:04:26.892: INFO: Waiting for pod test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b to disappear
Jul  5 09:04:26.921: INFO: Pod test-pod-c3517ccd-22e5-46d2-a6bd-f10224e2fd8b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.495 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:04:20.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1" in namespace "downward-api-2599" to be "Succeeded or Failed"
Jul  5 09:04:21.025: INFO: Pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.340652ms
Jul  5 09:04:23.056: INFO: Pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062876844s
Jul  5 09:04:25.086: INFO: Pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093037409s
Jul  5 09:04:27.121: INFO: Pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1": Phase="Running", Reason="", readiness=true. Elapsed: 6.128035122s
Jul  5 09:04:29.152: INFO: Pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158633573s
STEP: Saw pod success
Jul  5 09:04:29.152: INFO: Pod "downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1" satisfied condition "Succeeded or Failed"
Jul  5 09:04:29.182: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1 container client-container: <nil>
STEP: delete the pod
Jul  5 09:04:29.248: INFO: Waiting for pod downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1 to disappear
Jul  5 09:04:29.279: INFO: Pod downwardapi-volume-df16ad1c-06ad-4e0f-9ae1-5be18aa760f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.534 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:29.378: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

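------------------------------
Skip stanzas like the one above ("Only supported for node OS distro [gci ubuntu custom] (not debian)", "Only supported for providers [gce gke] (not aws)", "Driver local doesn't support DynamicPV -- skipping") come from environment gating in each test's BeforeEach. A stand-alone sketch of the provider variant using ginkgo.Skip; the real suite routes through the framework's skipper helpers, and skipUnlessProviderIs here is hypothetical:

package e2esketch

import (
	"fmt"

	"github.com/onsi/ginkgo"
)

// skipUnlessProviderIs aborts the current spec unless the cluster's cloud
// provider is in the supported list.
func skipUnlessProviderIs(current string, supported ...string) {
	for _, p := range supported {
		if p == current {
			return // provider is supported; run the test
		}
	}
	// Produces lines like "Only supported for providers [gce gke] (not aws)".
	ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, current))
}
------------------------------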
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:31.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9222" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":4,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:31.650: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:33.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2139" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:34.027: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
• [SLOW TEST:8.655 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":5,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:35.607: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Jul  5 09:04:09.291: INFO: PersistentVolumeClaim pvc-x7sgq found but phase is Pending instead of Bound.
Jul  5 09:04:11.323: INFO: PersistentVolumeClaim pvc-x7sgq found and phase=Bound (14.242107436s)
Jul  5 09:04:11.323: INFO: Waiting up to 3m0s for PersistentVolume local-p6swp to have phase Bound
Jul  5 09:04:11.352: INFO: PersistentVolume local-p6swp found and phase=Bound (29.230074ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-r6jf
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 09:04:11.442: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-r6jf" in namespace "provisioning-6421" to be "Succeeded or Failed"
Jul  5 09:04:11.471: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.642147ms
Jul  5 09:04:13.502: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060173012s
Jul  5 09:04:15.531: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089497782s
Jul  5 09:04:17.561: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119258958s
Jul  5 09:04:19.592: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Running", Reason="", readiness=true. Elapsed: 8.150751778s
Jul  5 09:04:21.621: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Running", Reason="", readiness=true. Elapsed: 10.179832499s
... skipping 2 lines ...
Jul  5 09:04:27.710: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Running", Reason="", readiness=true. Elapsed: 16.268642752s
Jul  5 09:04:29.740: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Running", Reason="", readiness=true. Elapsed: 18.29841866s
Jul  5 09:04:31.772: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Running", Reason="", readiness=true. Elapsed: 20.330045845s
Jul  5 09:04:33.802: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Running", Reason="", readiness=true. Elapsed: 22.360109888s
Jul  5 09:04:35.831: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.389075936s
STEP: Saw pod success
Jul  5 09:04:35.831: INFO: Pod "pod-subpath-test-preprovisionedpv-r6jf" satisfied condition "Succeeded or Failed"
Jul  5 09:04:35.863: INFO: Trying to get logs from node ip-172-20-52-221.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-r6jf container test-container-subpath-preprovisionedpv-r6jf: <nil>
STEP: delete the pod
Jul  5 09:04:35.937: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r6jf to disappear
Jul  5 09:04:35.968: INFO: Pod pod-subpath-test-preprovisionedpv-r6jf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-r6jf
Jul  5 09:04:35.968: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-r6jf" in namespace "provisioning-6421"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
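The local-volume subPath test above first waits for the claim ("PersistentVolumeClaim pvc-x7sgq found but phase is Pending instead of Bound.") and then for the PersistentVolume to reach phase Bound. A client-go sketch of that two-step wait, clientset setup as in the earlier sketch and the helper name hypothetical:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCAndPVBound waits for the claim, then for the underlying
// PersistentVolume (cluster-scoped, hence no namespace argument).
func waitForPVCAndPVBound(cs kubernetes.Interface, ns, pvcName, pvName string) error {
	if err := wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), pvcName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n",
				pvcName, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pv.Status.Phase == corev1.VolumeBound, nil
	})
}
------------------------------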
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:36.502: INFO: Only supported for providers [vsphere] (not aws)
... skipping 41 lines ...
• [SLOW TEST:11.450 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":1,"skipped":33,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:38.080: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:06.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
• [SLOW TEST:33.877 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:132
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:40.642: INFO: >>> kubeConfig: /root/.kube/config
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:24.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:41.073: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-303b6253-316f-4cb6-8152-7132f8826503
STEP: Creating a pod to test consume configMaps
Jul  5 09:04:36.765: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7" in namespace "projected-4563" to be "Succeeded or Failed"
Jul  5 09:04:36.794: INFO: Pod "pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.959269ms
Jul  5 09:04:38.824: INFO: Pod "pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05879676s
Jul  5 09:04:40.856: INFO: Pod "pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090594512s
Jul  5 09:04:42.886: INFO: Pod "pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121098971s
STEP: Saw pod success
Jul  5 09:04:42.887: INFO: Pod "pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7" satisfied condition "Succeeded or Failed"
Jul  5 09:04:42.915: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:04:42.979: INFO: Waiting for pod pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7 to disappear
Jul  5 09:04:43.007: INFO: Pod pod-projected-configmaps-b3c7f414-4bc3-4d26-9e96-09831d8ba6a7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.514 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":23,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:43.117: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 203 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:45.089: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:45.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9322" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:45.700: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 135 lines ...
Jul  5 09:04:25.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
Jul  5 09:04:25.597: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  5 09:04:25.663: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2196" in namespace "provisioning-2196" to be "Succeeded or Failed"
Jul  5 09:04:25.693: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Pending", Reason="", readiness=false. Elapsed: 30.025288ms
Jul  5 09:04:27.724: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0607918s
Jul  5 09:04:29.754: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091268609s
STEP: Saw pod success
Jul  5 09:04:29.754: INFO: Pod "hostpath-symlink-prep-provisioning-2196" satisfied condition "Succeeded or Failed"
Jul  5 09:04:29.755: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2196" in namespace "provisioning-2196"
Jul  5 09:04:29.790: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2196" to be fully deleted
Jul  5 09:04:29.820: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fbs8
Jul  5 09:04:35.912: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-2196 exec pod-subpath-test-inlinevolume-fbs8 --container test-container-volume-inlinevolume-fbs8 -- /bin/sh -c rm -r /test-volume/provisioning-2196'
Jul  5 09:04:36.403: INFO: stderr: ""
Jul  5 09:04:36.403: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-fbs8
Jul  5 09:04:36.403: INFO: Deleting pod "pod-subpath-test-inlinevolume-fbs8" in namespace "provisioning-2196"
Jul  5 09:04:36.439: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-fbs8" to be fully deleted
STEP: Deleting pod
Jul  5 09:04:40.500: INFO: Deleting pod "pod-subpath-test-inlinevolume-fbs8" in namespace "provisioning-2196"
Jul  5 09:04:40.561: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2196" in namespace "provisioning-2196" to be "Succeeded or Failed"
Jul  5 09:04:40.592: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Pending", Reason="", readiness=false. Elapsed: 31.515916ms
Jul  5 09:04:42.624: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06283794s
Jul  5 09:04:44.654: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093590712s
Jul  5 09:04:46.693: INFO: Pod "hostpath-symlink-prep-provisioning-2196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131978482s
STEP: Saw pod success
Jul  5 09:04:46.693: INFO: Pod "hostpath-symlink-prep-provisioning-2196" satisfied condition "Succeeded or Failed"
Jul  5 09:04:46.693: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2196" in namespace "provisioning-2196"
Jul  5 09:04:46.730: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2196" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:46.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2196" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":4,"failed":0}

SSS
------------------------------
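The subpath cleanup above shells out to kubectl exec with an explicit --server and --kubeconfig, then records the (empty) stdout/stderr. A rough stand-alone equivalent of that invocation with os/exec, not the framework's own kubectl runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Arguments copied from the "Running '/tmp/kubectl...'" line above.
	cmd := exec.Command("kubectl",
		"--server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io",
		"--kubeconfig=/root/.kube/config",
		"--namespace=provisioning-2196",
		"exec", "pod-subpath-test-inlinevolume-fbs8",
		"--container", "test-container-volume-inlinevolume-fbs8",
		"--", "/bin/sh", "-c", "rm -r /test-volume/provisioning-2196",
	)
	out, err := cmd.CombinedOutput() // the log shows empty stdout and stderr
	fmt.Printf("output: %q err: %v\n", out, err)
}
------------------------------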
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:46.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-6536" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Jul  5 09:04:41.089: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-a3b8d96d-969a-4549-8136-09f67d94699e" in namespace "security-context-test-7403" to be "Succeeded or Failed"
Jul  5 09:04:41.119: INFO: Pod "alpine-nnp-nil-a3b8d96d-969a-4549-8136-09f67d94699e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.635354ms
Jul  5 09:04:43.150: INFO: Pod "alpine-nnp-nil-a3b8d96d-969a-4549-8136-09f67d94699e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061497763s
Jul  5 09:04:45.182: INFO: Pod "alpine-nnp-nil-a3b8d96d-969a-4549-8136-09f67d94699e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092692198s
Jul  5 09:04:47.212: INFO: Pod "alpine-nnp-nil-a3b8d96d-969a-4549-8136-09f67d94699e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12310319s
Jul  5 09:04:47.212: INFO: Pod "alpine-nnp-nil-a3b8d96d-969a-4549-8136-09f67d94699e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:47.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7403" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":20,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:47.367: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-9f198bf4-1ce6-4191-90af-4f3ce9b1507f
STEP: Creating a pod to test consume configMaps
Jul  5 09:04:43.446: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9" in namespace "projected-9007" to be "Succeeded or Failed"
Jul  5 09:04:43.475: INFO: Pod "pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.78854ms
Jul  5 09:04:45.505: INFO: Pod "pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058584894s
Jul  5 09:04:47.543: INFO: Pod "pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096484883s
Jul  5 09:04:49.573: INFO: Pod "pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126629761s
STEP: Saw pod success
Jul  5 09:04:49.573: INFO: Pod "pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9" satisfied condition "Succeeded or Failed"
Jul  5 09:04:49.602: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:04:49.667: INFO: Waiting for pod pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9 to disappear
Jul  5 09:04:49.695: INFO: Pod pod-projected-configmaps-5c120170-e488-4e09-9674-4e640ab652c9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.516 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":51,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:895
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:34.407 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:406
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":2,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:50.789: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:04:55.547: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 129 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":2,"skipped":50,"failed":0}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:56.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:04:57.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4719" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
Jul  5 09:04:53.685: INFO: PersistentVolumeClaim pvc-tgr8r found but phase is Pending instead of Bound.
Jul  5 09:04:55.719: INFO: PersistentVolumeClaim pvc-tgr8r found and phase=Bound (14.266688217s)
Jul  5 09:04:55.719: INFO: Waiting up to 3m0s for PersistentVolume local-h2lwn to have phase Bound
Jul  5 09:04:55.749: INFO: PersistentVolume local-h2lwn found and phase=Bound (30.154626ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xbpg
STEP: Creating a pod to test subpath
Jul  5 09:04:55.878: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xbpg" in namespace "provisioning-1849" to be "Succeeded or Failed"
Jul  5 09:04:55.917: INFO: Pod "pod-subpath-test-preprovisionedpv-xbpg": Phase="Pending", Reason="", readiness=false. Elapsed: 39.413142ms
Jul  5 09:04:57.949: INFO: Pod "pod-subpath-test-preprovisionedpv-xbpg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071077244s
Jul  5 09:04:59.980: INFO: Pod "pod-subpath-test-preprovisionedpv-xbpg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102524162s
Jul  5 09:05:02.012: INFO: Pod "pod-subpath-test-preprovisionedpv-xbpg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134327947s
Jul  5 09:05:04.043: INFO: Pod "pod-subpath-test-preprovisionedpv-xbpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.164996681s
STEP: Saw pod success
Jul  5 09:05:04.043: INFO: Pod "pod-subpath-test-preprovisionedpv-xbpg" satisfied condition "Succeeded or Failed"
Jul  5 09:05:04.073: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-xbpg container test-container-volume-preprovisionedpv-xbpg: <nil>
STEP: delete the pod
Jul  5 09:05:04.144: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xbpg to disappear
Jul  5 09:05:04.174: INFO: Pod pod-subpath-test-preprovisionedpv-xbpg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xbpg
Jul  5 09:05:04.174: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xbpg" in namespace "provisioning-1849"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":38,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:05.506: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:05:05.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6747" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:05.839: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 158 lines ...
• [SLOW TEST:11.480 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:07.337: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 40 lines ...
• [SLOW TEST:20.512 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":5,"skipped":58,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:10.364: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:555
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:11.513: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:04:22.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
Jul  5 09:04:33.304: INFO: PersistentVolumeClaim pvc-ckntc found but phase is Pending instead of Bound.
Jul  5 09:04:35.338: INFO: PersistentVolumeClaim pvc-ckntc found and phase=Bound (2.074221944s)
STEP: Deleting the previously created pod
Jul  5 09:04:45.493: INFO: Deleting pod "pvc-volume-tester-zh4v9" in namespace "csi-mock-volumes-1862"
Jul  5 09:04:45.528: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zh4v9" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 09:04:53.630: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d6819d17-aeb6-44e9-8f80-26656e2b1a35/volumes/kubernetes.io~csi/pvc-8adae8f7-9db9-4edf-8afa-ecad04ba31ea/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-zh4v9
Jul  5 09:04:53.631: INFO: Deleting pod "pvc-volume-tester-zh4v9" in namespace "csi-mock-volumes-1862"
STEP: Deleting claim pvc-ckntc
Jul  5 09:04:53.721: INFO: Waiting up to 2m0s for PersistentVolume pvc-8adae8f7-9db9-4edf-8afa-ecad04ba31ea to get deleted
Jul  5 09:04:53.751: INFO: PersistentVolume pvc-8adae8f7-9db9-4edf-8afa-ecad04ba31ea found and phase=Released (30.455511ms)
Jul  5 09:04:55.788: INFO: PersistentVolume pvc-8adae8f7-9db9-4edf-8afa-ecad04ba31ea was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":5,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:15.089: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Jul  5 09:05:15.467: INFO: boom-server pod logs: 2021/07/05 09:04:01 external ip: 100.96.4.6
2021/07/05 09:04:01 listen on 0.0.0.0:9000
2021/07/05 09:04:01 probing 100.96.4.6

Jul  5 09:05:15.467: FAIL: Boom server pod did not send any bad packet to the client

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00037f500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00037f500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
... skipping 253 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:290

  Jul  5 09:05:15.467: Boom server pod did not send any bad packet to the client

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":0,"skipped":12,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:17.841: INFO: Only supported for providers [openstack] (not aws)
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":5,"skipped":9,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:05:12.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:7.975 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":6,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:20.001: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  5 09:05:20.200: INFO: Waiting up to 5m0s for pod "pod-c3bdd8e6-a23d-471c-8f47-4031577927fb" in namespace "emptydir-3304" to be "Succeeded or Failed"
Jul  5 09:05:20.229: INFO: Pod "pod-c3bdd8e6-a23d-471c-8f47-4031577927fb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.499245ms
Jul  5 09:05:22.259: INFO: Pod "pod-c3bdd8e6-a23d-471c-8f47-4031577927fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058881667s
STEP: Saw pod success
Jul  5 09:05:22.259: INFO: Pod "pod-c3bdd8e6-a23d-471c-8f47-4031577927fb" satisfied condition "Succeeded or Failed"
Jul  5 09:05:22.288: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-c3bdd8e6-a23d-471c-8f47-4031577927fb container test-container: <nil>
STEP: delete the pod
Jul  5 09:05:22.367: INFO: Waiting for pod pod-c3bdd8e6-a23d-471c-8f47-4031577927fb to disappear
Jul  5 09:05:22.396: INFO: Pod pod-c3bdd8e6-a23d-471c-8f47-4031577927fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:05:22.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3304" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":7,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":3,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:05:22.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-b8f8fa4b-1fd1-49e9-95f8-641374794924
STEP: Creating a pod to test consume secrets
Jul  5 09:05:22.795: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191" in namespace "projected-6074" to be "Succeeded or Failed"
Jul  5 09:05:22.824: INFO: Pod "pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191": Phase="Pending", Reason="", readiness=false. Elapsed: 28.835645ms
Jul  5 09:05:24.853: INFO: Pod "pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058037928s
STEP: Saw pod success
Jul  5 09:05:24.853: INFO: Pod "pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191" satisfied condition "Succeeded or Failed"
Jul  5 09:05:24.882: INFO: Trying to get logs from node ip-172-20-52-221.us-east-2.compute.internal pod pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  5 09:05:24.950: INFO: Waiting for pod pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191 to disappear
Jul  5 09:05:24.980: INFO: Pod pod-projected-secrets-a360863b-c865-4c4c-9e87-e12aa1d1a191 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:05:24.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6074" for this suite.
STEP: Destroying namespace "secret-namespace-9242" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":8,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:25.089: INFO: Only supported for providers [vsphere] (not aws)
... skipping 43 lines ...
• [SLOW TEST:11.573 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":1,"skipped":24,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:29.461: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
STEP: Destroying namespace "services-1622" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":2,"skipped":27,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:29.744: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 26 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  5 09:05:06.006: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  5 09:05:06.006: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-24hb
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 09:05:06.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-24hb" in namespace "provisioning-859" to be "Succeeded or Failed"
Jul  5 09:05:06.070: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.850146ms
Jul  5 09:05:08.101: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061621438s
Jul  5 09:05:10.131: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 4.092205597s
Jul  5 09:05:12.163: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 6.123400262s
Jul  5 09:05:14.194: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 8.154850421s
Jul  5 09:05:16.224: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 10.185220344s
... skipping 2 lines ...
Jul  5 09:05:22.318: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 16.279040923s
Jul  5 09:05:24.350: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 18.31049112s
Jul  5 09:05:26.381: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 20.341486038s
Jul  5 09:05:28.411: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Running", Reason="", readiness=true. Elapsed: 22.371778187s
Jul  5 09:05:30.442: INFO: Pod "pod-subpath-test-inlinevolume-24hb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.402454821s
STEP: Saw pod success
Jul  5 09:05:30.442: INFO: Pod "pod-subpath-test-inlinevolume-24hb" satisfied condition "Succeeded or Failed"
Jul  5 09:05:30.472: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-inlinevolume-24hb container test-container-subpath-inlinevolume-24hb: <nil>
STEP: delete the pod
Jul  5 09:05:30.595: INFO: Waiting for pod pod-subpath-test-inlinevolume-24hb to disappear
Jul  5 09:05:30.628: INFO: Pod pod-subpath-test-inlinevolume-24hb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-24hb
Jul  5 09:05:30.628: INFO: Deleting pod "pod-subpath-test-inlinevolume-24hb" in namespace "provisioning-859"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":50,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 18 lines ...
Jul  5 09:04:52.829: INFO: PersistentVolumeClaim pvc-xltmt found but phase is Pending instead of Bound.
Jul  5 09:04:54.860: INFO: PersistentVolumeClaim pvc-xltmt found and phase=Bound (8.155454165s)
Jul  5 09:04:54.861: INFO: Waiting up to 3m0s for PersistentVolume aws-s6bb5 to have phase Bound
Jul  5 09:04:54.893: INFO: PersistentVolume aws-s6bb5 found and phase=Bound (32.552956ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ncvq
STEP: Creating a pod to test exec-volume-test
Jul  5 09:04:54.982: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ncvq" in namespace "volume-2828" to be "Succeeded or Failed"
Jul  5 09:04:55.012: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 29.077781ms
Jul  5 09:04:57.041: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058814605s
Jul  5 09:04:59.071: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088360983s
Jul  5 09:05:01.101: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118340409s
Jul  5 09:05:03.133: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150005627s
Jul  5 09:05:05.162: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179450042s
Jul  5 09:05:07.193: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.210243537s
Jul  5 09:05:09.223: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.24096515s
Jul  5 09:05:11.254: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.271342896s
Jul  5 09:05:13.285: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.302753731s
Jul  5 09:05:15.315: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.332682009s
STEP: Saw pod success
Jul  5 09:05:15.315: INFO: Pod "exec-volume-test-preprovisionedpv-ncvq" satisfied condition "Succeeded or Failed"
Jul  5 09:05:15.345: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod exec-volume-test-preprovisionedpv-ncvq container exec-container-preprovisionedpv-ncvq: <nil>
STEP: delete the pod
Jul  5 09:05:15.410: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ncvq to disappear
Jul  5 09:05:15.440: INFO: Pod exec-volume-test-preprovisionedpv-ncvq no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ncvq
Jul  5 09:05:15.440: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ncvq" in namespace "volume-2828"
STEP: Deleting pv and pvc
Jul  5 09:05:15.469: INFO: Deleting PersistentVolumeClaim "pvc-xltmt"
Jul  5 09:05:15.499: INFO: Deleting PersistentVolume "aws-s6bb5"
Jul  5 09:05:15.655: INFO: Couldn't delete PD "aws://us-east-2a/vol-0df09e6e1c89f182b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0df09e6e1c89f182b is currently attached to i-07289ac18778d31dd
	status code: 400, request id: 186aa79b-eae4-4166-b826-dfe27e2fe0e4
Jul  5 09:05:20.901: INFO: Couldn't delete PD "aws://us-east-2a/vol-0df09e6e1c89f182b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0df09e6e1c89f182b is currently attached to i-07289ac18778d31dd
	status code: 400, request id: c32ad263-db23-4f61-a34f-5b7881665dc5
Jul  5 09:05:26.147: INFO: Couldn't delete PD "aws://us-east-2a/vol-0df09e6e1c89f182b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0df09e6e1c89f182b is currently attached to i-07289ac18778d31dd
	status code: 400, request id: d40e25f8-596f-4c5f-87ce-a1003c0d6236
Jul  5 09:05:31.399: INFO: Successfully deleted PD "aws://us-east-2a/vol-0df09e6e1c89f182b".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:05:31.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2828" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":46,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":50,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:05:07.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 87 lines ...
• [SLOW TEST:103.068 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:216
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":1,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:33.249: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 90 lines ...
• [SLOW TEST:5.392 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":3,"skipped":30,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:35.188: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Jul  5 09:05:32.263: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5279" to be "Succeeded or Failed"
Jul  5 09:05:32.291: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.551649ms
Jul  5 09:05:34.320: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057728107s
Jul  5 09:05:36.353: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090245857s
STEP: Saw pod success
Jul  5 09:05:36.353: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  5 09:05:36.384: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul  5 09:05:36.456: INFO: Waiting for pod pod-host-path-test to disappear
Jul  5 09:05:36.488: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:05:36.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5279" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":2,"skipped":54,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:6.483 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":4,"skipped":36,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:6.565 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":5,"skipped":39,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:48.294: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 144 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:50.787: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
Jul  5 09:05:37.874: INFO: PersistentVolumeClaim pvc-nptb8 found but phase is Pending instead of Bound.
Jul  5 09:05:39.904: INFO: PersistentVolumeClaim pvc-nptb8 found and phase=Bound (12.216919225s)
Jul  5 09:05:39.904: INFO: Waiting up to 3m0s for PersistentVolume local-b74l9 to have phase Bound
Jul  5 09:05:39.933: INFO: PersistentVolume local-b74l9 found and phase=Bound (28.980118ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mmm5
STEP: Creating a pod to test subpath
Jul  5 09:05:40.022: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mmm5" in namespace "provisioning-1760" to be "Succeeded or Failed"
Jul  5 09:05:40.052: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.328884ms
Jul  5 09:05:42.083: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060023189s
Jul  5 09:05:44.113: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090754381s
Jul  5 09:05:46.143: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120859113s
Jul  5 09:05:48.174: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151244164s
STEP: Saw pod success
Jul  5 09:05:48.174: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5" satisfied condition "Succeeded or Failed"
Jul  5 09:05:48.203: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-mmm5 container test-container-subpath-preprovisionedpv-mmm5: <nil>
STEP: delete the pod
Jul  5 09:05:48.266: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mmm5 to disappear
Jul  5 09:05:48.296: INFO: Pod pod-subpath-test-preprovisionedpv-mmm5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mmm5
Jul  5 09:05:48.296: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mmm5" in namespace "provisioning-1760"
STEP: Creating pod pod-subpath-test-preprovisionedpv-mmm5
STEP: Creating a pod to test subpath
Jul  5 09:05:48.354: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mmm5" in namespace "provisioning-1760" to be "Succeeded or Failed"
Jul  5 09:05:48.384: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.338849ms
Jul  5 09:05:50.414: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059202284s
Jul  5 09:05:52.443: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088479681s
STEP: Saw pod success
Jul  5 09:05:52.443: INFO: Pod "pod-subpath-test-preprovisionedpv-mmm5" satisfied condition "Succeeded or Failed"
Jul  5 09:05:52.473: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-mmm5 container test-container-subpath-preprovisionedpv-mmm5: <nil>
STEP: delete the pod
Jul  5 09:05:52.540: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mmm5 to disappear
Jul  5 09:05:52.570: INFO: Pod pod-subpath-test-preprovisionedpv-mmm5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mmm5
Jul  5 09:05:52.570: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mmm5" in namespace "provisioning-1760"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:53.154: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:05:54.941: INFO: Only supported for providers [gce gke] (not aws)
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":2,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:06:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4760" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:03.701: INFO: Driver local doesn't support ext4 -- skipping
... skipping 65 lines ...
Jul  5 09:05:49.151: INFO: PersistentVolumeClaim pvc-2vvdg found and phase=Bound (28.558204ms)
Jul  5 09:05:49.151: INFO: Waiting up to 3m0s for PersistentVolume nfs-vflpb to have phase Bound
Jul  5 09:05:49.180: INFO: PersistentVolume nfs-vflpb found and phase=Bound (29.016913ms)
STEP: Checking pod has write access to PersistentVolume
Jul  5 09:05:49.238: INFO: Creating nfs test pod
Jul  5 09:05:49.267: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 09:05:49.267: INFO: Waiting up to 5m0s for pod "pvc-tester-2zx2x" in namespace "pv-7533" to be "Succeeded or Failed"
Jul  5 09:05:49.297: INFO: Pod "pvc-tester-2zx2x": Phase="Pending", Reason="", readiness=false. Elapsed: 30.134214ms
Jul  5 09:05:51.328: INFO: Pod "pvc-tester-2zx2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061268853s
Jul  5 09:05:53.359: INFO: Pod "pvc-tester-2zx2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092422341s
STEP: Saw pod success
Jul  5 09:05:53.360: INFO: Pod "pvc-tester-2zx2x" satisfied condition "Succeeded or Failed"
Jul  5 09:05:53.360: INFO: Pod pvc-tester-2zx2x succeeded 
Jul  5 09:05:53.360: INFO: Deleting pod "pvc-tester-2zx2x" in namespace "pv-7533"
Jul  5 09:05:53.392: INFO: Wait up to 5m0s for pod "pvc-tester-2zx2x" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul  5 09:05:53.423: INFO: Deleting PVC pvc-2vvdg to trigger reclamation of PV 
Jul  5 09:05:53.423: INFO: Deleting PersistentVolumeClaim "pvc-2vvdg"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":3,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:06:03.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633" in namespace "projected-234" to be "Succeeded or Failed"
Jul  5 09:06:03.969: INFO: Pod "downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633": Phase="Pending", Reason="", readiness=false. Elapsed: 30.556136ms
Jul  5 09:06:06.000: INFO: Pod "downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061782659s
STEP: Saw pod success
Jul  5 09:06:06.000: INFO: Pod "downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633" satisfied condition "Succeeded or Failed"
Jul  5 09:06:06.031: INFO: Trying to get logs from node ip-172-20-52-221.us-east-2.compute.internal pod downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633 container client-container: <nil>
STEP: delete the pod
Jul  5 09:06:06.102: INFO: Waiting for pod downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633 to disappear
Jul  5 09:06:06.132: INFO: Pod downwardapi-volume-6fcd6ae7-4949-4c09-a2a4-72a6f0c46633 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:06:06.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-234" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:06.206: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Jul  5 09:04:51.488: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5126
Jul  5 09:04:51.519: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5126
Jul  5 09:04:51.552: INFO: creating *v1.StatefulSet: csi-mock-volumes-5126-7187/csi-mockplugin
Jul  5 09:04:51.584: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5126
Jul  5 09:04:51.616: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5126"
Jul  5 09:04:51.646: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5126 to register on node ip-172-20-57-184.us-east-2.compute.internal
I0705 09:04:54.113898   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0705 09:04:54.144505   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5126","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0705 09:04:54.181253   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0705 09:04:54.213548   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0705 09:04:54.312946   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5126","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0705 09:04:55.338657   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5126"},"Error":"","FullError":null}
STEP: Creating pod
Jul  5 09:04:56.804: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  5 09:04:56.836: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gznt7] to have phase Bound
I0705 09:04:56.854221   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jul  5 09:04:56.865: INFO: PersistentVolumeClaim pvc-gznt7 found but phase is Pending instead of Bound.
I0705 09:04:56.888017   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967"}}},"Error":"","FullError":null}
Jul  5 09:04:58.896: INFO: PersistentVolumeClaim pvc-gznt7 found and phase=Bound (2.06019108s)
I0705 09:04:59.179837   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  5 09:04:59.210: INFO: >>> kubeConfig: /root/.kube/config
I0705 09:04:59.474356   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2caa5c37-7735-40d9-ae89-5d9ada856967/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967","storage.kubernetes.io/csiProvisionerIdentity":"1625475894223-8081-csi-mock-csi-mock-volumes-5126"}},"Response":{},"Error":"","FullError":null}
I0705 09:04:59.506950   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  5 09:04:59.540: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 09:04:59.803: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 09:05:00.095: INFO: >>> kubeConfig: /root/.kube/config
I0705 09:05:00.362448   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2caa5c37-7735-40d9-ae89-5d9ada856967/globalmount","target_path":"/var/lib/kubelet/pods/129d5a06-ca28-454e-8728-b69b77440c8b/volumes/kubernetes.io~csi/pvc-2caa5c37-7735-40d9-ae89-5d9ada856967/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967","storage.kubernetes.io/csiProvisionerIdentity":"1625475894223-8081-csi-mock-csi-mock-volumes-5126"}},"Response":{},"Error":"","FullError":null}
I0705 09:05:00.985820   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0705 09:05:01.018320   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/129d5a06-ca28-454e-8728-b69b77440c8b/volumes/kubernetes.io~csi/pvc-2caa5c37-7735-40d9-ae89-5d9ada856967/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null}
Jul  5 09:05:01.048: INFO: Deleting pod "pvc-volume-tester-vxpv6" in namespace "csi-mock-volumes-5126"
Jul  5 09:05:01.079: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vxpv6" to be fully deleted
Jul  5 09:05:04.018: INFO: >>> kubeConfig: /root/.kube/config
I0705 09:05:04.279503   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/129d5a06-ca28-454e-8728-b69b77440c8b/volumes/kubernetes.io~csi/pvc-2caa5c37-7735-40d9-ae89-5d9ada856967/mount"},"Response":{},"Error":"","FullError":null}
I0705 09:05:04.312918   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0705 09:05:04.342473   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2caa5c37-7735-40d9-ae89-5d9ada856967/globalmount"},"Response":{},"Error":"","FullError":null}
I0705 09:05:13.186286   12667 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul  5 09:05:14.170: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gznt7", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5126", SelfLink:"", UID:"2caa5c37-7735-40d9-ae89-5d9ada856967", ResourceVersion:"4331", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761072696, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ed49f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ed4a08), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0032603b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0032603c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  5 09:05:14.170: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gznt7", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5126", SelfLink:"", UID:"2caa5c37-7735-40d9-ae89-5d9ada856967", ResourceVersion:"4332", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761072696, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5126"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f84780), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f84798), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f847b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f847c8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002fb2a90), VolumeMode:(*v1.PersistentVolumeMode)(0xc002fb2aa0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  5 09:05:14.170: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gznt7", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5126", SelfLink:"", UID:"2caa5c37-7735-40d9-ae89-5d9ada856967", ResourceVersion:"4336", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761072696, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5126"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003197608), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003197620), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003197638), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003197650), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967", StorageClassName:(*string)(0xc00304d350), VolumeMode:(*v1.PersistentVolumeMode)(0xc00304d360), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  5 09:05:14.170: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gznt7", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5126", SelfLink:"", UID:"2caa5c37-7735-40d9-ae89-5d9ada856967", ResourceVersion:"4337", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761072696, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5126"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003197680), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003197698), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031976b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031976c8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031976e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031976f8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967", StorageClassName:(*string)(0xc00304d390), VolumeMode:(*v1.PersistentVolumeMode)(0xc00304d3a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  5 09:05:14.170: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gznt7", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5126", SelfLink:"", UID:"2caa5c37-7735-40d9-ae89-5d9ada856967", ResourceVersion:"4830", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761072696, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(0xc003c07320), DeletionGracePeriodSeconds:(*int64)(0xc002941188), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5126"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c07338), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c07350), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c07368), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c07380), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c07398), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c073b0), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2caa5c37-7735-40d9-ae89-5d9ada856967", StorageClassName:(*string)(0xc003373080), VolumeMode:(*v1.PersistentVolumeMode)(0xc003373090), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":8,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:22.000: INFO: Only supported for providers [openstack] (not aws)
... skipping 84 lines ...
• [SLOW TEST:54.882 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":6,"skipped":54,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:25.725: INFO: Only supported for providers [vsphere] (not aws)
... skipping 96 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:06:25.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":7,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:25.945: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 109 lines ...
• [SLOW TEST:60.293 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":2,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:33.046: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-85ccd679-f35e-4cbd-ae57-544ec4c1b694
STEP: Creating a pod to test consume configMaps
Jul  5 09:06:32.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15" in namespace "configmap-60" to be "Succeeded or Failed"
Jul  5 09:06:32.253: INFO: Pod "pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15": Phase="Pending", Reason="", readiness=false. Elapsed: 28.923597ms
Jul  5 09:06:34.283: INFO: Pod "pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059021107s
STEP: Saw pod success
Jul  5 09:06:34.283: INFO: Pod "pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15" satisfied condition "Succeeded or Failed"
Jul  5 09:06:34.312: INFO: Trying to get logs from node ip-172-20-52-221.us-east-2.compute.internal pod pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:06:34.376: INFO: Waiting for pod pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15 to disappear
Jul  5 09:06:34.407: INFO: Pod pod-configmaps-a8284ace-de39-4ea9-afe3-9a8b7c239a15 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:06:34.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-60" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:34.496: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 74 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jul  5 09:06:33.213: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  5 09:06:33.213: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kq5r
STEP: Creating a pod to test subpath
Jul  5 09:06:33.246: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kq5r" in namespace "provisioning-6548" to be "Succeeded or Failed"
Jul  5 09:06:33.276: INFO: Pod "pod-subpath-test-inlinevolume-kq5r": Phase="Pending", Reason="", readiness=false. Elapsed: 30.690062ms
Jul  5 09:06:35.308: INFO: Pod "pod-subpath-test-inlinevolume-kq5r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062417433s
Jul  5 09:06:37.341: INFO: Pod "pod-subpath-test-inlinevolume-kq5r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095114336s
Jul  5 09:06:39.373: INFO: Pod "pod-subpath-test-inlinevolume-kq5r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127374448s
STEP: Saw pod success
Jul  5 09:06:39.373: INFO: Pod "pod-subpath-test-inlinevolume-kq5r" satisfied condition "Succeeded or Failed"
Jul  5 09:06:39.405: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-inlinevolume-kq5r container test-container-volume-inlinevolume-kq5r: <nil>
STEP: delete the pod
Jul  5 09:06:39.473: INFO: Waiting for pod pod-subpath-test-inlinevolume-kq5r to disappear
Jul  5 09:06:39.504: INFO: Pod pod-subpath-test-inlinevolume-kq5r no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kq5r
Jul  5 09:06:39.504: INFO: Deleting pod "pod-subpath-test-inlinevolume-kq5r" in namespace "provisioning-6548"
... skipping 42 lines ...
• [SLOW TEST:71.868 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:06:43.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul  5 09:06:43.540: INFO: found topology map[topology.kubernetes.io/zone:us-east-2a]
Jul  5 09:06:43.540: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul  5 09:06:43.540: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
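
The skip above is expected on a single-zone cluster: the conflict test needs at least two values for topology.kubernetes.io/zone, but the log found only us-east-2a. A hedged sketch of the kind of StorageClass the test builds, whose AllowedTopologies it then deliberately contradicts in the pod's topology requirements; the object name is a placeholder:

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A StorageClass restricted to one zone; scheduling a pod whose topology
    // conflicts with this is expected to fail.
    func topologySC() *storagev1.StorageClass {
        delayed := storagev1.VolumeBindingWaitForFirstConsumer
        return &storagev1.StorageClass{
            ObjectMeta:        metav1.ObjectMeta{Name: "topology-conflict-sc"}, // placeholder
            Provisioner:       "kubernetes.io/aws-ebs",
            VolumeBindingMode: &delayed,
            AllowedTopologies: []v1.TopologySelectorTerm{{
                MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
                    Key:    "topology.kubernetes.io/zone",
                    Values: []string{"us-east-2a"}, // the only zone the log found
                }},
            }},
        }
    }
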
... skipping 134 lines ...
Jul  5 09:06:03.001: INFO: PersistentVolumeClaim csi-hostpathv9j4n found but phase is Pending instead of Bound.
Jul  5 09:06:05.032: INFO: PersistentVolumeClaim csi-hostpathv9j4n found but phase is Pending instead of Bound.
Jul  5 09:06:07.061: INFO: PersistentVolumeClaim csi-hostpathv9j4n found but phase is Pending instead of Bound.
Jul  5 09:06:09.091: INFO: PersistentVolumeClaim csi-hostpathv9j4n found and phase=Bound (14.237177484s)
STEP: Creating pod pod-subpath-test-dynamicpv-l9cj
STEP: Creating a pod to test subpath
Jul  5 09:06:09.179: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-l9cj" in namespace "provisioning-1968" to be "Succeeded or Failed"
Jul  5 09:06:09.208: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 28.915534ms
Jul  5 09:06:11.239: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059636701s
Jul  5 09:06:13.269: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08918751s
Jul  5 09:06:15.298: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11881642s
Jul  5 09:06:17.328: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.148774072s
STEP: Saw pod success
Jul  5 09:06:17.328: INFO: Pod "pod-subpath-test-dynamicpv-l9cj" satisfied condition "Succeeded or Failed"
Jul  5 09:06:17.357: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-subpath-test-dynamicpv-l9cj container test-container-subpath-dynamicpv-l9cj: <nil>
STEP: delete the pod
Jul  5 09:06:17.434: INFO: Waiting for pod pod-subpath-test-dynamicpv-l9cj to disappear
Jul  5 09:06:17.468: INFO: Pod pod-subpath-test-dynamicpv-l9cj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-l9cj
Jul  5 09:06:17.468: INFO: Deleting pod "pod-subpath-test-dynamicpv-l9cj" in namespace "provisioning-1968"
STEP: Creating pod pod-subpath-test-dynamicpv-l9cj
STEP: Creating a pod to test subpath
Jul  5 09:06:17.556: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-l9cj" in namespace "provisioning-1968" to be "Succeeded or Failed"
Jul  5 09:06:17.590: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.426577ms
Jul  5 09:06:19.619: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062926378s
Jul  5 09:06:21.650: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094061856s
Jul  5 09:06:23.680: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123565539s
Jul  5 09:06:25.709: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153062801s
Jul  5 09:06:27.739: INFO: Pod "pod-subpath-test-dynamicpv-l9cj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18296112s
STEP: Saw pod success
Jul  5 09:06:27.739: INFO: Pod "pod-subpath-test-dynamicpv-l9cj" satisfied condition "Succeeded or Failed"
Jul  5 09:06:27.768: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-subpath-test-dynamicpv-l9cj container test-container-subpath-dynamicpv-l9cj: <nil>
STEP: delete the pod
Jul  5 09:06:27.836: INFO: Waiting for pod pod-subpath-test-dynamicpv-l9cj to disappear
Jul  5 09:06:27.865: INFO: Pod pod-subpath-test-dynamicpv-l9cj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-l9cj
Jul  5 09:06:27.865: INFO: Deleting pod "pod-subpath-test-dynamicpv-l9cj" in namespace "provisioning-1968"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Jul  5 09:06:47.766: INFO: AfterEach: Cleaning up test resources.
Jul  5 09:06:47.766: INFO: pvc is nil
Jul  5 09:06:47.766: INFO: Deleting PersistentVolume "hostpath-zbkcm"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":11,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:47.820: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 60 lines ...
STEP: Creating a validating webhook configuration
Jul  5 09:06:07.080: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:17.243: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:27.344: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:37.443: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:47.505: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:47.505: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 460 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:06:47.506: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":6,"skipped":30,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:50.879: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
STEP: Destroying namespace "webhook-9740-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":12,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:52.460: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-fd92719b-ddb1-406a-83eb-abd7e6ee89a4
STEP: Creating a pod to test consume secrets
Jul  5 09:06:51.115: INFO: Waiting up to 5m0s for pod "pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6" in namespace "secrets-7596" to be "Succeeded or Failed"
Jul  5 09:06:51.145: INFO: Pod "pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.873531ms
Jul  5 09:06:53.176: INFO: Pod "pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060671253s
STEP: Saw pod success
Jul  5 09:06:53.176: INFO: Pod "pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6" satisfied condition "Succeeded or Failed"
Jul  5 09:06:53.206: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6 container secret-volume-test: <nil>
STEP: delete the pod
Jul  5 09:06:53.271: INFO: Waiting for pod pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6 to disappear
Jul  5 09:06:53.301: INFO: Pod pod-secrets-52ecc1c8-15c3-4830-a7d9-c305d45d85e6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:06:53.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7596" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":32,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:06:53.380: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":35,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  5 09:04:10.106: INFO: Creating ReplicaSet my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c
Jul  5 09:04:10.166: INFO: Pod name my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c: Found 1 pods out of 1
Jul  5 09:04:10.166: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c" is running
Jul  5 09:04:14.231: INFO: Pod "my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-05 09:04:10 +0000 UTC Reason: Message:}])
Jul  5 09:04:14.231: INFO: Trying to dial the pod
Jul  5 09:04:49.325: INFO: Controller my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c: Failed to GET from replica 1 [my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd]: the server is currently unable to handle the request (get pods my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761072650, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 09:05:24.325: INFO: Controller my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c: Failed to GET from replica 1 [my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd]: the server is currently unable to handle the request (get pods my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761072650, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 09:05:59.324: INFO: Controller my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c: Failed to GET from replica 1 [my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd]: the server is currently unable to handle the request (get pods my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761072650, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 09:06:34.324: INFO: Controller my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c: Failed to GET from replica 1 [my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd]: the server is currently unable to handle the request (get pods my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761072650, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 09:07:04.413: INFO: Controller my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c: Failed to GET from replica 1 [my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd]: the server is currently unable to handle the request (get pods my-hostname-basic-9c31d5b9-1d0a-42e7-8a98-ac304b0a9f7c-qwmbd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761072650, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 09:07:04.414: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0007b9800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 264 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:07:04.414: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110
------------------------------
{"msg":"FAILED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":1,"skipped":4,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":5,"skipped":44,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:8.514 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:265
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":9,"skipped":42,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:10.859: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 242 lines ...
Jul  5 09:06:35.179: INFO: Waiting for pod aws-client to disappear
Jul  5 09:06:35.208: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jul  5 09:06:35.208: INFO: Deleting PersistentVolumeClaim "pvc-6xdhm"
Jul  5 09:06:35.243: INFO: Deleting PersistentVolume "aws-mxdn9"
Jul  5 09:06:35.519: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: 073cba92-7a59-4153-b1da-04665b260885
Jul  5 09:06:40.783: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: eb3918eb-0998-4859-84e9-eb46524e9bc3
Jul  5 09:06:46.003: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: de6e4086-26f2-4c0a-a918-cce5b254143e
Jul  5 09:06:51.215: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: 26ef71c5-b701-4222-bf69-b97898f7cc16
Jul  5 09:06:56.452: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: 952d8581-9d8e-4da2-90bd-dbabb752994c
Jul  5 09:07:01.674: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: e066ea91-f6ef-4ffd-a457-d74fb7f3cccc
Jul  5 09:07:06.911: INFO: Couldn't delete PD "aws://us-east-2a/vol-01ad4fb2a687daa27", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01ad4fb2a687daa27 is currently attached to i-04d472c664b474617
	status code: 400, request id: a46bb60a-f3be-48ca-89e0-50cfb76f5c22
Jul  5 09:07:12.177: INFO: Successfully deleted PD "aws://us-east-2a/vol-01ad4fb2a687daa27".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:12.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8174" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":6,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 64 lines ...
Jul  5 09:06:27.227: INFO: Pod aws-client still exists
Jul  5 09:06:29.199: INFO: Waiting for pod aws-client to disappear
Jul  5 09:06:29.229: INFO: Pod aws-client still exists
Jul  5 09:06:31.198: INFO: Waiting for pod aws-client to disappear
Jul  5 09:06:31.228: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jul  5 09:06:31.503: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: e2d42346-c810-4d91-91d2-113b1a214e58
Jul  5 09:06:36.714: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: 94658eec-8ccc-4bd0-82a2-5c86ec33a572
Jul  5 09:06:41.925: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: 31f1a01f-b68d-4fcd-832e-f7cc66873e65
Jul  5 09:06:47.156: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: 59fca403-9707-4e17-a110-39cc956dcec8
Jul  5 09:06:52.377: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: 32c7d9e9-8abc-449d-aa45-f5e28bb5a073
Jul  5 09:06:57.585: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: 8aad22d2-66dc-43ef-bd6a-a1ddc5f3ca7a
Jul  5 09:07:02.785: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: 0bcfb588-ae76-48a4-9478-0f1feff48629
Jul  5 09:07:08.023: INFO: Couldn't delete PD "aws://us-east-2a/vol-0fcbdde00b3e6613b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fcbdde00b3e6613b is currently attached to i-0ffd9423c66cf1001
	status code: 400, request id: d4afc54f-6949-4369-aacd-fb24af24f2d5
Jul  5 09:07:13.286: INFO: Successfully deleted PD "aws://us-east-2a/vol-0fcbdde00b3e6613b".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:13.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8809" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":4,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:12.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  5 09:07:12.442: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7a8f0db2-b37a-4895-a223-a90480779542" in namespace "security-context-test-8216" to be "Succeeded or Failed"
Jul  5 09:07:12.471: INFO: Pod "busybox-privileged-false-7a8f0db2-b37a-4895-a223-a90480779542": Phase="Pending", Reason="", readiness=false. Elapsed: 28.677276ms
Jul  5 09:07:14.500: INFO: Pod "busybox-privileged-false-7a8f0db2-b37a-4895-a223-a90480779542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058031175s
Jul  5 09:07:14.500: INFO: Pod "busybox-privileged-false-7a8f0db2-b37a-4895-a223-a90480779542" satisfied condition "Succeeded or Failed"
Jul  5 09:07:14.541: INFO: Got logs for pod "busybox-privileged-false-7a8f0db2-b37a-4895-a223-a90480779542": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:14.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8216" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:14.612: INFO: >>> kubeConfig: /root/.kube/config
... skipping 42 lines ...
I0705 09:04:50.575748   12675 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 09:04:53.576157   12675 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 09:04:56.577095   12675 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 09:04:56.577: INFO: Creating new exec pod
Jul  5 09:05:01.706: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4582 exec execpodkkhzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Jul  5 09:05:07.339: INFO: rc: 1
Jul  5 09:05:07.339: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4582 exec execpodkkhzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-update-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
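
Each retry above execs the same probe in the helper pod. A hedged Go equivalent of what the framework shells out to, with the kubectl path, server, namespace, and pod name copied verbatim from the log lines above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/tmp/kubectl1688043009/kubectl",
            "--server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io",
            "--kubeconfig=/root/.kube/config",
            "--namespace=services-4582",
            "exec", "execpodkkhzb", "--",
            "/bin/sh", "-x", "-c",
            "echo hostName | nc -v -t -w 2 nodeport-update-service 80")
        out, err := cmd.CombinedOutput()
        // "nc: getaddrinfo: Try again" in stderr means the service DNS name is
        // not resolving inside the pod yet, so the test retries about once per
        // second until it connects or gives up.
        fmt.Printf("err=%v\n%s", err, out)
    }
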
... skipping 294 lines (the same probe is retried roughly every 6 seconds and fails identically with "nc: getaddrinfo: Try again" until the 2m0s timeout below) ...
Jul  5 09:07:13.238: FAIL: Unexpected error:
    <*errors.errorString | 0xc003bac120>: {
        s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
occurred

... skipping 291 lines ...
• Failure [149.409 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1229

  Jul  5 09:07:13.238: Unexpected error:
      <*errors.errorString | 0xc003bac120>: {
          s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1263
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":2,"skipped":20,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:16.694: INFO: Only supported for providers [azure] (not aws)
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:17.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2691" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":5,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
STEP: Creating a mutating webhook configuration
Jul  5 09:06:32.468: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:42.635: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:06:52.732: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:02.834: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:12.900: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:12.900: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000246250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 539 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:07:12.900: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000246250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":8,"skipped":62,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:17.970: INFO: Only supported for providers [azure] (not aws)
... skipping 182 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":2,"skipped":6,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:14.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul  5 09:07:15.008: INFO: Waiting up to 5m0s for pod "security-context-abc51d77-598e-492b-8733-b4a488e2ffe9" in namespace "security-context-3942" to be "Succeeded or Failed"
Jul  5 09:07:15.036: INFO: Pod "security-context-abc51d77-598e-492b-8733-b4a488e2ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.605912ms
Jul  5 09:07:17.066: INFO: Pod "security-context-abc51d77-598e-492b-8733-b4a488e2ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057941444s
Jul  5 09:07:19.095: INFO: Pod "security-context-abc51d77-598e-492b-8733-b4a488e2ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087264658s
Jul  5 09:07:21.125: INFO: Pod "security-context-abc51d77-598e-492b-8733-b4a488e2ffe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116889629s
STEP: Saw pod success
Jul  5 09:07:21.125: INFO: Pod "security-context-abc51d77-598e-492b-8733-b4a488e2ffe9" satisfied condition "Succeeded or Failed"
Jul  5 09:07:21.153: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod security-context-abc51d77-598e-492b-8733-b4a488e2ffe9 container test-container: <nil>
STEP: delete the pod
Jul  5 09:07:21.221: INFO: Waiting for pod security-context-abc51d77-598e-492b-8733-b4a488e2ffe9 to disappear
Jul  5 09:07:21.252: INFO: Pod security-context-abc51d77-598e-492b-8733-b4a488e2ffe9 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.481 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":8,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:21.323: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 127 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:06:39.649: INFO: >>> kubeConfig: /root/.kube/config
... skipping 99 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:07:20.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b" in namespace "projected-8972" to be "Succeeded or Failed"
Jul  5 09:07:20.236: INFO: Pod "downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.563379ms
Jul  5 09:07:22.272: INFO: Pod "downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065409736s
Jul  5 09:07:24.303: INFO: Pod "downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096291521s
STEP: Saw pod success
Jul  5 09:07:24.303: INFO: Pod "downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b" satisfied condition "Succeeded or Failed"
Jul  5 09:07:24.334: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b container client-container: <nil>
STEP: delete the pod
Jul  5 09:07:24.401: INFO: Waiting for pod downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b to disappear
Jul  5 09:07:24.430: INFO: Pod downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:24.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8972" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:24.517: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 139 lines ...
• [SLOW TEST:15.651 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":10,"skipped":60,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:26.647: INFO: Only supported for providers [openstack] (not aws)
... skipping 140 lines ...
Jul  5 09:06:33.794: INFO: PersistentVolumeClaim csi-hostpath7rrk7 found but phase is Pending instead of Bound.
Jul  5 09:06:35.824: INFO: PersistentVolumeClaim csi-hostpath7rrk7 found but phase is Pending instead of Bound.
Jul  5 09:06:37.854: INFO: PersistentVolumeClaim csi-hostpath7rrk7 found but phase is Pending instead of Bound.
Jul  5 09:06:39.886: INFO: PersistentVolumeClaim csi-hostpath7rrk7 found and phase=Bound (12.213329491s)
STEP: Expanding non-expandable pvc
Jul  5 09:06:39.945: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  5 09:06:40.007: INFO: Error updating pvc csi-hostpath7rrk7: persistentvolumeclaims "csi-hostpath7rrk7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 15 lines (the same "is forbidden" error recurs every ~2s while the test confirms the resize is consistently rejected) ...
Jul  5 09:07:10.130: INFO: Error updating pvc csi-hostpath7rrk7: persistentvolumeclaims "csi-hostpath7rrk7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  5 09:07:10.130: INFO: Deleting PersistentVolumeClaim "csi-hostpath7rrk7"
Jul  5 09:07:10.164: INFO: Waiting up to 5m0s for PersistentVolume pvc-c96af2ef-8edf-4a51-8c5f-e1826d3d062d to get deleted
Jul  5 09:07:10.194: INFO: PersistentVolume pvc-c96af2ef-8edf-4a51-8c5f-e1826d3d062d found and phase=Released (29.756054ms)
Jul  5 09:07:15.224: INFO: PersistentVolume pvc-c96af2ef-8edf-4a51-8c5f-e1826d3d062d was removed
STEP: Deleting sc
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":8,"skipped":79,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:24.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-07f0e99f-2734-4f93-944a-e08048ac5b88
STEP: Creating a pod to test consume configMaps
Jul  5 09:07:24.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0" in namespace "configmap-8340" to be "Succeeded or Failed"
Jul  5 09:07:24.854: INFO: Pod "pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.542582ms
Jul  5 09:07:26.884: INFO: Pod "pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059170876s
Jul  5 09:07:28.915: INFO: Pod "pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090443888s
Jul  5 09:07:30.946: INFO: Pod "pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121035667s
STEP: Saw pod success
Jul  5 09:07:30.946: INFO: Pod "pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0" satisfied condition "Succeeded or Failed"
Jul  5 09:07:30.976: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:07:31.046: INFO: Waiting for pod pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0 to disappear
Jul  5 09:07:31.076: INFO: Pod pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.548 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:31.178: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 34 lines ...
Jul  5 09:07:25.574: INFO: Got stdout from 52.15.167.15:22: Hello from ubuntu@ip-172-20-57-184
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jul  5 09:07:26.533: INFO: Got stdout from 3.138.113.49:22: stdout
Jul  5 09:07:26.534: INFO: Got stderr from 3.138.113.49:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ubuntu@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:31.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-3825" for this suite.


... skipping 9 lines ...
Jul  5 09:07:23.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  5 09:07:23.760: INFO: Waiting up to 5m0s for pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30" in namespace "security-context-18" to be "Succeeded or Failed"
Jul  5 09:07:23.791: INFO: Pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30": Phase="Pending", Reason="", readiness=false. Elapsed: 31.533598ms
Jul  5 09:07:25.822: INFO: Pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062081873s
Jul  5 09:07:27.852: INFO: Pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092260875s
Jul  5 09:07:29.882: INFO: Pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12258429s
Jul  5 09:07:31.914: INFO: Pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154301803s
STEP: Saw pod success
Jul  5 09:07:31.914: INFO: Pod "security-context-efc7a882-356b-4363-bedc-bf6fc97aac30" satisfied condition "Succeeded or Failed"
Jul  5 09:07:31.945: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod security-context-efc7a882-356b-4363-bedc-bf6fc97aac30 container test-container: <nil>
STEP: delete the pod
Jul  5 09:07:32.015: INFO: Waiting for pod security-context-efc7a882-356b-4363-bedc-bf6fc97aac30 to disappear
Jul  5 09:07:32.045: INFO: Pod security-context-efc7a882-356b-4363-bedc-bf6fc97aac30 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.535 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":13,"skipped":50,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:32.161: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 31 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:07:29.094: INFO: Waiting up to 5m0s for pod "metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df" in namespace "downward-api-5828" to be "Succeeded or Failed"
Jul  5 09:07:29.124: INFO: Pod "metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df": Phase="Pending", Reason="", readiness=false. Elapsed: 29.751018ms
Jul  5 09:07:31.158: INFO: Pod "metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064110062s
Jul  5 09:07:33.189: INFO: Pod "metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095380242s
STEP: Saw pod success
Jul  5 09:07:33.190: INFO: Pod "metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df" satisfied condition "Succeeded or Failed"
Jul  5 09:07:33.219: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df container client-container: <nil>
STEP: delete the pod
Jul  5 09:07:33.300: INFO: Waiting for pod metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df to disappear
Jul  5 09:07:33.342: INFO: Pod metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:33.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5828" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:33.448: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":6,"skipped":9,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:31.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:255
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:33.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5864" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":7,"skipped":9,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:33.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:34.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2120" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":8,"skipped":9,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:34.321: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 20 lines ...
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:34.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-9e07e2a1-d326-4998-baa7-be0ae92e942e
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:34.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9650" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":9,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:34.604: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6300-crds.webhook.example.com via the AdmissionRegistration API
Jul  5 09:06:52.801: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:02.962: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:13.062: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:23.175: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:33.239: INFO: Waiting for webhook configuration to be ready...
Jul  5 09:07:33.240: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002b8240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 501 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:07:33.240: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b8240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 21 lines ...
• [SLOW TEST:19.738 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":9,"skipped":73,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:41.101: INFO: Only supported for providers [openstack] (not aws)
... skipping 23 lines ...
Jul  5 09:06:20.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  5 09:06:20.215: INFO: created pod
Jul  5 09:06:20.215: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2152" to be "Succeeded or Failed"
Jul  5 09:06:20.245: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 30.121432ms
Jul  5 09:06:22.278: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.062907666s
Jul  5 09:06:24.309: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 4.094237116s
Jul  5 09:06:26.340: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 6.124698938s
Jul  5 09:06:28.371: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 8.156303242s
Jul  5 09:06:30.404: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 10.188797686s
... skipping 16 lines ...
Jul  5 09:07:04.939: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 44.723674323s
Jul  5 09:07:06.970: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 46.755159732s
Jul  5 09:07:09.002: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 48.786515951s
Jul  5 09:07:11.032: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 50.817195839s
Jul  5 09:07:13.064: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.849308498s
STEP: Saw pod success
Jul  5 09:07:13.064: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Jul  5 09:07:43.066: INFO: polling logs
Jul  5 09:07:43.099: INFO: Pod logs: 
2021/07/05 09:06:20 OK: Got token
2021/07/05 09:06:20 validating with in-cluster discovery
2021/07/05 09:06:20 OK: got issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/05 09:06:20 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery", Subject:"system:serviceaccount:svcaccounts-2152:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1625476580, NotBefore:1625475980, IssuedAt:1625475980, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2152", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"c3a91037-f7e4-4242-ad57-fd06f948e62a"}}}
2021/07/05 09:06:46 failed to validate with in-cluster discovery: Get "https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery/.well-known/openid-configuration": x509: certificate signed by unknown authority
2021/07/05 09:06:46 falling back to validating with external discovery
2021/07/05 09:06:46 OK: got issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/05 09:06:46 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery", Subject:"system:serviceaccount:svcaccounts-2152:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1625476580, NotBefore:1625475980, IssuedAt:1625475980, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2152", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"c3a91037-f7e4-4242-ad57-fd06f948e62a"}}}
2021/07/05 09:07:11 OK: Constructed OIDC provider for issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/05 09:07:11 OK: Validated signature on JWT
... skipping 11 lines ...
• [SLOW TEST:83.193 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:43.210: INFO: Only supported for providers [openstack] (not aws)
... skipping 184 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":5,"skipped":27,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:37.466: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul  5 09:07:37.619: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  5 09:07:37.619: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vjkx
STEP: Creating a pod to test subpath
Jul  5 09:07:37.677: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vjkx" in namespace "provisioning-8432" to be "Succeeded or Failed"
Jul  5 09:07:37.727: INFO: Pod "pod-subpath-test-inlinevolume-vjkx": Phase="Pending", Reason="", readiness=false. Elapsed: 50.009006ms
Jul  5 09:07:39.757: INFO: Pod "pod-subpath-test-inlinevolume-vjkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079828457s
Jul  5 09:07:41.786: INFO: Pod "pod-subpath-test-inlinevolume-vjkx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109175618s
Jul  5 09:07:43.821: INFO: Pod "pod-subpath-test-inlinevolume-vjkx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14409243s
STEP: Saw pod success
Jul  5 09:07:43.821: INFO: Pod "pod-subpath-test-inlinevolume-vjkx" satisfied condition "Succeeded or Failed"
Jul  5 09:07:43.856: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-inlinevolume-vjkx container test-container-subpath-inlinevolume-vjkx: <nil>
STEP: delete the pod
Jul  5 09:07:43.922: INFO: Waiting for pod pod-subpath-test-inlinevolume-vjkx to disappear
Jul  5 09:07:43.951: INFO: Pod pod-subpath-test-inlinevolume-vjkx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vjkx
Jul  5 09:07:43.951: INFO: Deleting pod "pod-subpath-test-inlinevolume-vjkx" in namespace "provisioning-8432"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":27,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:23.865: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Jul  5 09:07:38.697: INFO: PersistentVolumeClaim pvc-zm9vv found but phase is Pending instead of Bound.
Jul  5 09:07:40.729: INFO: PersistentVolumeClaim pvc-zm9vv found and phase=Bound (12.235657132s)
Jul  5 09:07:40.729: INFO: Waiting up to 3m0s for PersistentVolume local-w4f5x to have phase Bound
Jul  5 09:07:40.760: INFO: PersistentVolume local-w4f5x found and phase=Bound (30.674833ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qv95
STEP: Creating a pod to test subpath
Jul  5 09:07:40.855: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qv95" in namespace "provisioning-2139" to be "Succeeded or Failed"
Jul  5 09:07:40.886: INFO: Pod "pod-subpath-test-preprovisionedpv-qv95": Phase="Pending", Reason="", readiness=false. Elapsed: 30.836036ms
Jul  5 09:07:42.918: INFO: Pod "pod-subpath-test-preprovisionedpv-qv95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063268319s
Jul  5 09:07:44.950: INFO: Pod "pod-subpath-test-preprovisionedpv-qv95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094774837s
STEP: Saw pod success
Jul  5 09:07:44.950: INFO: Pod "pod-subpath-test-preprovisionedpv-qv95" satisfied condition "Succeeded or Failed"
Jul  5 09:07:44.982: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-qv95 container test-container-volume-preprovisionedpv-qv95: <nil>
STEP: delete the pod
Jul  5 09:07:45.052: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qv95 to disappear
Jul  5 09:07:45.083: INFO: Pod pod-subpath-test-preprovisionedpv-qv95 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qv95
Jul  5 09:07:45.083: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qv95" in namespace "provisioning-2139"
... skipping 28 lines ...
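In the pre-provisioned PV test above, the claim stays Pending until the controller binds it to the matching PersistentVolume, which is why the framework polls both objects' phases before creating the pod. A minimal sketch of those two conditions with client-go (clientset setup as in the previous sketch):

package storage

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls until the claim leaves Pending and reports Bound.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

// waitForPVBound does the same for the PersistentVolume side of the binding.
func waitForPVBound(cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pv.Status.Phase == corev1.VolumeBound, nil
	})
}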
Jul  5 09:07:41.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jul  5 09:07:41.288: INFO: Waiting up to 5m0s for pod "security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124" in namespace "security-context-999" to be "Succeeded or Failed"
Jul  5 09:07:41.317: INFO: Pod "security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124": Phase="Pending", Reason="", readiness=false. Elapsed: 28.605713ms
Jul  5 09:07:43.346: INFO: Pod "security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057971457s
Jul  5 09:07:45.375: INFO: Pod "security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086955292s
Jul  5 09:07:47.404: INFO: Pod "security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115934564s
STEP: Saw pod success
Jul  5 09:07:47.404: INFO: Pod "security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124" satisfied condition "Succeeded or Failed"
Jul  5 09:07:47.433: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124 container test-container: <nil>
STEP: delete the pod
Jul  5 09:07:47.496: INFO: Waiting for pod security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124 to disappear
Jul  5 09:07:47.525: INFO: Pod security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.471 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":10,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:47.595: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:48.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-6217" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":11,"skipped":99,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:48.521: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 157 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:203
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:209
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":4,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
I0705 09:04:54.111383   12612 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 09:04:57.111727   12612 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jul  5 09:04:57.210: INFO: Creating new exec pod
Jul  5 09:05:01.302: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:05:16.837: INFO: rc: 1
Jul  5 09:05:16.837: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:05:18.839: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:05:34.308: INFO: rc: 1
Jul  5 09:05:34.308: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:05:34.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:05:50.299: INFO: rc: 1
Jul  5 09:05:50.299: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:05:50.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:06:06.352: INFO: rc: 1
Jul  5 09:06:06.352: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:06:06.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:06:22.294: INFO: rc: 1
Jul  5 09:06:22.294: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:06:22.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:06:38.270: INFO: rc: 1
Jul  5 09:06:38.270: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:06:38.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:06:54.279: INFO: rc: 1
Jul  5 09:06:54.279: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:06:54.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:07:10.301: INFO: rc: 1
Jul  5 09:07:10.301: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:07:10.838: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:07:26.334: INFO: rc: 1
Jul  5 09:07:26.334: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:07:26.334: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1917 exec execpodvnpbm -- /bin/sh -x -c nslookup clusterip-service.services-1917.svc.cluster.local'
Jul  5 09:07:41.791: INFO: rc: 1
Jul  5 09:07:41.791: INFO: ExternalName service "services-1917/execpodvnpbm" failed to resolve to IP
Jul  5 09:07:41.792: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002b8250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 293 lines ...
• Failure [185.072 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:07:41.792: Unexpected error:
      <*errors.errorString | 0xc0002b8250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1411
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":2,"skipped":14,"failed":1,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:55.892: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:07:56.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3770" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":3,"skipped":23,"failed":1,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:07:56.860: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 13 lines ...
STEP: creating replication controller nodeport-test in namespace services-9954
I0705 09:05:11.805247   12616 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9954, replica count: 2
I0705 09:05:14.856355   12616 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 09:05:14.856: INFO: Creating new exec pod
Jul  5 09:05:17.983: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:23.528: INFO: rc: 1
Jul  5 09:05:23.528: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ + nc -vecho -t hostName -w
 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
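The probe being retried above pipes "echo hostName" into nc with a 2-second timeout from an exec pod. "nc: getaddrinfo: Try again" typically means the service DNS name has not propagated yet, while the later "connect ... timed out" errors mean the name (or a literal IP) resolved but nothing accepted the connection. The same reachability check sketched in Go with the standard library instead of an exec'd nc (hypothetical helper):

package main

import (
	"fmt"
	"net"
	"strconv"
	"time"
)

// reachable dials host:port with the same 2-second budget `nc -w 2` uses.
func reachable(host string, port int) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, strconv.Itoa(port)), 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(reachable("nodeport-test", 80))
}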
Jul  5 09:05:24.528: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:29.954: INFO: rc: 1
Jul  5 09:05:29.954: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ + echo hostNamenc
 -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:30.528: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:33.002: INFO: rc: 1
Jul  5 09:05:33.002: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 nodeport-test 80
+ echo hostName
nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:33.529: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:38.985: INFO: rc: 1
Jul  5 09:05:38.986: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:39.528: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:39.960: INFO: stderr: "+ + nc -vecho -t hostName -w\n 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Jul  5 09:05:39.960: INFO: stdout: ""
Jul  5 09:05:40.528: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:45.979: INFO: rc: 1
Jul  5 09:05:45.979: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:46.529: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  5 09:05:46.961: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Jul  5 09:05:46.961: INFO: stdout: "nodeport-test-7jn87"
Jul  5 09:05:46.961: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.148.113 80'
Jul  5 09:05:49.347: INFO: rc: 1
Jul  5 09:05:49.347: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.148.113 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.71.148.113 80
nc: connect to 100.71.148.113 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:50.348: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.148.113 80'
Jul  5 09:05:52.756: INFO: rc: 1
Jul  5 09:05:52.756: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.148.113 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.71.148.113 80
nc: connect to 100.71.148.113 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:53.348: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.148.113 80'
Jul  5 09:05:53.789: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.148.113 80\nConnection to 100.71.148.113 80 port [tcp/http] succeeded!\n"
Jul  5 09:05:53.789: INFO: stdout: "nodeport-test-7jn87"
Jul  5 09:05:53.789: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:05:56.213: INFO: rc: 1
Jul  5 09:05:56.213: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:05:57.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:05:59.639: INFO: rc: 1
Jul  5 09:05:59.640: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ + nc -v -techo -w hostName 2
 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:06:00.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:06:02.649: INFO: rc: 1
Jul  5 09:06:02.649: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ + echo hostNamenc
 -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:06:03.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:06:05.737: INFO: rc: 1
Jul  5 09:06:05.737: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 434 lines (the same 'echo hostName | nc -v -t -w 2 172.20.38.136 31275' probe, retried about every three seconds from 09:06:06 through 09:07:38, timed out on every attempt) ...
Jul  5 09:07:39.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:41.646: INFO: rc: 1
Jul  5 09:07:41.646: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:42.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:44.670: INFO: rc: 1
Jul  5 09:07:44.670: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.38.136 31275
+ echo hostName
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:45.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:47.642: INFO: rc: 1
Jul  5 09:07:47.642: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:48.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:50.689: INFO: rc: 1
Jul  5 09:07:50.689: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.38.136 31275
+ echo hostName
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:51.215: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:53.647: INFO: rc: 1
Jul  5 09:07:53.647: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:54.214: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:56.650: INFO: rc: 1
Jul  5 09:07:56.650: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:56.650: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275'
Jul  5 09:07:59.073: INFO: rc: 1
Jul  5 09:07:59.073: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9954 exec execpod2jzdg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.38.136 31275:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.38.136 31275
nc: connect to 172.20.38.136 port 31275 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:59.074: FAIL: Unexpected error:
    <*errors.errorString | 0xc001792030>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.38.136:31275 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.38.136:31275 over TCP protocol
occurred

... skipping 297 lines ...
• Failure [169.430 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:07:59.074: Unexpected error:
      <*errors.errorString | 0xc001792030>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.38.136:31275 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.38.136:31275 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1187
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":1,"skipped":18,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:01.038: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Jul  5 09:07:44.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Jul  5 09:07:44.281: INFO: Waiting up to 5m0s for pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" in namespace "svcaccounts-3146" to be "Succeeded or Failed"
Jul  5 09:07:44.310: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 28.89771ms
Jul  5 09:07:46.341: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060535674s
Jul  5 09:07:48.372: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090718588s
Jul  5 09:07:50.401: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120011898s
Jul  5 09:07:52.431: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.150600278s
STEP: Saw pod success
Jul  5 09:07:52.432: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" satisfied condition "Succeeded or Failed"
Jul  5 09:07:52.461: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:07:52.526: INFO: Waiting for pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 to disappear
Jul  5 09:07:52.554: INFO: Pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 no longer exists
STEP: Creating a pod to test service account token: 
Jul  5 09:07:52.586: INFO: Waiting up to 5m0s for pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" in namespace "svcaccounts-3146" to be "Succeeded or Failed"
Jul  5 09:07:52.615: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 28.86885ms
Jul  5 09:07:54.645: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059060687s
Jul  5 09:07:56.679: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092515848s
STEP: Saw pod success
Jul  5 09:07:56.679: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" satisfied condition "Succeeded or Failed"
Jul  5 09:07:56.708: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:07:56.776: INFO: Waiting for pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 to disappear
Jul  5 09:07:56.805: INFO: Pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 no longer exists
STEP: Creating a pod to test service account token: 
Jul  5 09:07:56.835: INFO: Waiting up to 5m0s for pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" in namespace "svcaccounts-3146" to be "Succeeded or Failed"
Jul  5 09:07:56.864: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 29.110873ms
Jul  5 09:07:58.896: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06056973s
Jul  5 09:08:00.926: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090700929s
STEP: Saw pod success
Jul  5 09:08:00.926: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" satisfied condition "Succeeded or Failed"
Jul  5 09:08:00.955: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:08:01.025: INFO: Waiting for pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 to disappear
Jul  5 09:08:01.053: INFO: Pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 no longer exists
STEP: Creating a pod to test service account token: 
Jul  5 09:08:01.084: INFO: Waiting up to 5m0s for pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" in namespace "svcaccounts-3146" to be "Succeeded or Failed"
Jul  5 09:08:01.112: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Pending", Reason="", readiness=false. Elapsed: 28.709966ms
Jul  5 09:08:03.143: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058982465s
STEP: Saw pod success
Jul  5 09:08:03.143: INFO: Pod "test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37" satisfied condition "Succeeded or Failed"
Jul  5 09:08:03.172: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:08:03.236: INFO: Waiting for pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 to disappear
Jul  5 09:08:03.265: INFO: Pod test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:19.223 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":32,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:04.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-146" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":8,"skipped":35,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:05.029: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 5 lines ...
Jul  5 09:08:05.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  5 09:08:05.227: INFO: Waiting up to 5m0s for pod "security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb" in namespace "security-context-8654" to be "Succeeded or Failed"
Jul  5 09:08:05.255: INFO: Pod "security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.577606ms
Jul  5 09:08:07.285: INFO: Pod "security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057972119s
STEP: Saw pod success
Jul  5 09:08:07.285: INFO: Pod "security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb" satisfied condition "Succeeded or Failed"
Jul  5 09:08:07.314: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb container test-container: <nil>
STEP: delete the pod
Jul  5 09:08:07.385: INFO: Waiting for pod security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb to disappear
Jul  5 09:08:07.413: INFO: Pod security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:07.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8654" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":9,"skipped":42,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:07.496: INFO: Only supported for providers [vsphere] (not aws)
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:09.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1812" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:09.880: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 33 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7797
STEP: Creating statefulset with conflicting port in namespace statefulset-7797
STEP: Waiting until pod test-pod starts running in namespace statefulset-7797
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7797
Jul  5 09:08:14.299: INFO: Observed stateful pod in namespace: statefulset-7797, name: ss-0, uid: 8ec4e03e-122b-4721-9623-e4c630dad796, status phase: Pending. Waiting for statefulset controller to delete.
Jul  5 09:08:15.283: INFO: Observed stateful pod in namespace: statefulset-7797, name: ss-0, uid: 8ec4e03e-122b-4721-9623-e4c630dad796, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 09:08:15.290: INFO: Observed stateful pod in namespace: statefulset-7797, name: ss-0, uid: 8ec4e03e-122b-4721-9623-e4c630dad796, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 09:08:15.292: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7797
STEP: Removing pod with conflicting port in namespace statefulset-7797
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7797 and reaches the running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jul  5 09:08:21.443: INFO: Deleting all statefulset in ns statefulset-7797
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":11,"skipped":61,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:31.790: INFO: Only supported for providers [gce gke] (not aws)
... skipping 311 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":10,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
Jul  5 09:08:32.523: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.215 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:32.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-807" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":12,"skipped":87,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:32.920: INFO: Only supported for providers [azure] (not aws)
... skipping 168 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":4,"skipped":44,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:34.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8930" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":13,"skipped":96,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:34.714: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:08:34.783: INFO: Waiting up to 5m0s for pod "metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17" in namespace "projected-6641" to be "Succeeded or Failed"
Jul  5 09:08:34.813: INFO: Pod "metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17": Phase="Pending", Reason="", readiness=false. Elapsed: 29.90053ms
Jul  5 09:08:36.844: INFO: Pod "metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060839243s
STEP: Saw pod success
Jul  5 09:08:36.844: INFO: Pod "metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17" satisfied condition "Succeeded or Failed"
Jul  5 09:08:36.875: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17 container client-container: <nil>
STEP: delete the pod
Jul  5 09:08:36.948: INFO: Waiting for pod metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17 to disappear
Jul  5 09:08:36.978: INFO: Pod metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:36.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6641" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":49,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:37.062: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 192 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":80,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:37.253: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:37.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":12,"skipped":85,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:37.398: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:39.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8375" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:08:37.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-6b06468f-4103-44fb-84aa-20e256864258
STEP: Creating a pod to test consume secrets
Jul  5 09:08:37.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b" in namespace "projected-938" to be "Succeeded or Failed"
Jul  5 09:08:37.657: INFO: Pod "pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.712472ms
Jul  5 09:08:39.692: INFO: Pod "pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064444515s
STEP: Saw pod success
Jul  5 09:08:39.692: INFO: Pod "pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b" satisfied condition "Succeeded or Failed"
Jul  5 09:08:39.724: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  5 09:08:39.797: INFO: Waiting for pod pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b to disappear
Jul  5 09:08:39.826: INFO: Pod pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:39.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-938" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":88,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:39.913: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 81 lines ...
• [SLOW TEST:7.707 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":14,"skipped":98,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:42.454: INFO: Driver "local" does not provide raw block - skipping
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":58,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:48.659: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:08:52.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2197" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":8,"skipped":65,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:08:52.930: INFO: Only supported for providers [gce gke] (not aws)
... skipping 42 lines ...
Jul  5 09:08:52.729: INFO: PersistentVolumeClaim pvc-kx2zx found but phase is Pending instead of Bound.
Jul  5 09:08:54.761: INFO: PersistentVolumeClaim pvc-kx2zx found and phase=Bound (12.223743321s)
Jul  5 09:08:54.761: INFO: Waiting up to 3m0s for PersistentVolume local-glbkx to have phase Bound
Jul  5 09:08:54.791: INFO: PersistentVolume local-glbkx found and phase=Bound (30.007781ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9qgz
STEP: Creating a pod to test subpath
Jul  5 09:08:54.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9qgz" in namespace "provisioning-2149" to be "Succeeded or Failed"
Jul  5 09:08:54.912: INFO: Pod "pod-subpath-test-preprovisionedpv-9qgz": Phase="Pending", Reason="", readiness=false. Elapsed: 29.841274ms
Jul  5 09:08:56.942: INFO: Pod "pod-subpath-test-preprovisionedpv-9qgz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060138988s
Jul  5 09:08:58.974: INFO: Pod "pod-subpath-test-preprovisionedpv-9qgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091492137s
STEP: Saw pod success
Jul  5 09:08:58.974: INFO: Pod "pod-subpath-test-preprovisionedpv-9qgz" satisfied condition "Succeeded or Failed"
Jul  5 09:08:59.003: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-9qgz container test-container-subpath-preprovisionedpv-9qgz: <nil>
STEP: delete the pod
Jul  5 09:08:59.071: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9qgz to disappear
Jul  5 09:08:59.101: INFO: Pod pod-subpath-test-preprovisionedpv-9qgz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9qgz
Jul  5 09:08:59.101: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9qgz" in namespace "provisioning-2149"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":14,"skipped":95,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:246.050 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:348
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":4,"skipped":57,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:10.269: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 49 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-a576d7de-72a9-4f30-b59a-61e4b6dfdc52
STEP: Creating a pod to test consume configMaps
Jul  5 09:09:10.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b" in namespace "projected-9304" to be "Succeeded or Failed"
Jul  5 09:09:10.746: INFO: Pod "pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.975656ms
Jul  5 09:09:12.775: INFO: Pod "pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058580282s
STEP: Saw pod success
Jul  5 09:09:12.775: INFO: Pod "pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b" satisfied condition "Succeeded or Failed"
Jul  5 09:09:12.804: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b container agnhost-container: <nil>
STEP: delete the pod
Jul  5 09:09:12.874: INFO: Waiting for pod pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b to disappear
Jul  5 09:09:12.903: INFO: Pod pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:12.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9304" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":68,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
Jul  5 09:09:08.624: INFO: PersistentVolumeClaim pvc-wklf7 found but phase is Pending instead of Bound.
Jul  5 09:09:10.655: INFO: PersistentVolumeClaim pvc-wklf7 found and phase=Bound (14.2478717s)
Jul  5 09:09:10.655: INFO: Waiting up to 3m0s for PersistentVolume local-kgw48 to have phase Bound
Jul  5 09:09:10.685: INFO: PersistentVolume local-kgw48 found and phase=Bound (30.197796ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tzvl
STEP: Creating a pod to test subpath
Jul  5 09:09:10.778: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tzvl" in namespace "provisioning-8304" to be "Succeeded or Failed"
Jul  5 09:09:10.811: INFO: Pod "pod-subpath-test-preprovisionedpv-tzvl": Phase="Pending", Reason="", readiness=false. Elapsed: 32.869678ms
Jul  5 09:09:12.841: INFO: Pod "pod-subpath-test-preprovisionedpv-tzvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063173519s
Jul  5 09:09:14.873: INFO: Pod "pod-subpath-test-preprovisionedpv-tzvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094898889s
STEP: Saw pod success
Jul  5 09:09:14.873: INFO: Pod "pod-subpath-test-preprovisionedpv-tzvl" satisfied condition "Succeeded or Failed"
Jul  5 09:09:14.903: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-tzvl container test-container-subpath-preprovisionedpv-tzvl: <nil>
STEP: delete the pod
Jul  5 09:09:14.985: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tzvl to disappear
Jul  5 09:09:15.015: INFO: Pod pod-subpath-test-preprovisionedpv-tzvl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tzvl
Jul  5 09:09:15.015: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tzvl" in namespace "provisioning-8304"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:16.425: INFO: Only supported for providers [gce gke] (not aws)
... skipping 85 lines ...
Jul  5 09:08:06.841: INFO: PersistentVolumeClaim csi-hostpathg759p found but phase is Pending instead of Bound.
Jul  5 09:08:08.872: INFO: PersistentVolumeClaim csi-hostpathg759p found but phase is Pending instead of Bound.
Jul  5 09:08:10.909: INFO: PersistentVolumeClaim csi-hostpathg759p found but phase is Pending instead of Bound.
Jul  5 09:08:12.939: INFO: PersistentVolumeClaim csi-hostpathg759p found and phase=Bound (10.189873054s)
STEP: Expanding non-expandable pvc
Jul  5 09:08:12.999: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  5 09:08:13.062: INFO: Error updating pvc csi-hostpathg759p: persistentvolumeclaims "csi-hostpathg759p" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 15 lines ...
Jul  5 09:08:43.187: INFO: Error updating pvc csi-hostpathg759p: persistentvolumeclaims "csi-hostpathg759p" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  5 09:08:43.187: INFO: Deleting PersistentVolumeClaim "csi-hostpathg759p"
Jul  5 09:08:43.218: INFO: Waiting up to 5m0s for PersistentVolume pvc-7ae5680f-dde7-4db9-81a4-dd21f964fb33 to get deleted
Jul  5 09:08:43.247: INFO: PersistentVolume pvc-7ae5680f-dde7-4db9-81a4-dd21f964fb33 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-2883
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":23,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:18.636: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-q5fm
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 09:09:00.113: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-q5fm" in namespace "subpath-5590" to be "Succeeded or Failed"
Jul  5 09:09:00.143: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Pending", Reason="", readiness=false. Elapsed: 29.884196ms
Jul  5 09:09:02.174: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 2.061046892s
Jul  5 09:09:04.204: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 4.091057931s
Jul  5 09:09:06.235: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 6.122255138s
Jul  5 09:09:08.267: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 8.153500024s
Jul  5 09:09:10.298: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 10.185064361s
Jul  5 09:09:12.329: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 12.216407398s
Jul  5 09:09:14.363: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 14.250049719s
Jul  5 09:09:16.394: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 16.280480152s
Jul  5 09:09:18.424: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 18.31140124s
Jul  5 09:09:20.455: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Running", Reason="", readiness=true. Elapsed: 20.342370507s
Jul  5 09:09:22.488: INFO: Pod "pod-subpath-test-secret-q5fm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.374591721s
STEP: Saw pod success
Jul  5 09:09:22.488: INFO: Pod "pod-subpath-test-secret-q5fm" satisfied condition "Succeeded or Failed"
Jul  5 09:09:22.518: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-subpath-test-secret-q5fm container test-container-subpath-secret-q5fm: <nil>
STEP: delete the pod
Jul  5 09:09:22.883: INFO: Waiting for pod pod-subpath-test-secret-q5fm to disappear
Jul  5 09:09:22.913: INFO: Pod pod-subpath-test-secret-q5fm no longer exists
STEP: Deleting pod pod-subpath-test-secret-q5fm
Jul  5 09:09:22.913: INFO: Deleting pod "pod-subpath-test-secret-q5fm" in namespace "subpath-5590"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":97,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:23.015: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-7864
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7864
STEP: Deleting pre-stop pod
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
Jul  5 09:09:23.992: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 21 lines ...
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:07:34 +0000 UTC - event for server: {kubelet ip-172-20-57-184.us-east-2.compute.internal} Started: Started container agnhost-container
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:07:39 +0000 UTC - event for tester: {default-scheduler } Scheduled: Successfully assigned prestop-7864/tester to ip-172-20-55-216.us-east-2.compute.internal
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:07:41 +0000 UTC - event for tester: {kubelet ip-172-20-55-216.us-east-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:07:41 +0000 UTC - event for tester: {kubelet ip-172-20-55-216.us-east-2.compute.internal} Created: Created container tester
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:07:41 +0000 UTC - event for tester: {kubelet ip-172-20-55-216.us-east-2.compute.internal} Started: Started container tester
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:07:43 +0000 UTC - event for tester: {kubelet ip-172-20-55-216.us-east-2.compute.internal} Killing: Stopping container tester
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:08:16 +0000 UTC - event for tester: {kubelet ip-172-20-55-216.us-east-2.compute.internal} FailedPreStopHook: Exec lifecycle hook ([wget -O- --post-data={"Source": "prestop"} http://100.96.2.88:8080/write]) for Container "tester" in Pod "tester_prestop-7864(a29703f1-f609-4643-809f-fe6f9c597aab)" failed - error: command 'wget -O- --post-data={"Source": "prestop"} http://100.96.2.88:8080/write' exited with 137: Connecting to 100.96.2.88:8080 (100.96.2.88:8080)
, message: "Connecting to 100.96.2.88:8080 (100.96.2.88:8080)\n"
Jul  5 09:09:24.063: INFO: At 2021-07-05 09:09:24 +0000 UTC - event for server: {kubelet ip-172-20-57-184.us-east-2.compute.internal} Killing: Stopping container agnhost-container
Jul  5 09:09:24.093: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 09:09:24.093: INFO: 
Jul  5 09:09:24.126: INFO: 
Logging node info for node ip-172-20-38-136.us-east-2.compute.internal
... skipping 237 lines ...
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:09:23.992: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":9,"skipped":90,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:244.109 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:09:25.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  5 09:09:26.031: INFO: Waiting up to 5m0s for pod "security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916" in namespace "security-context-7306" to be "Succeeded or Failed"
Jul  5 09:09:26.062: INFO: Pod "security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916": Phase="Pending", Reason="", readiness=false. Elapsed: 30.378418ms
Jul  5 09:09:28.092: INFO: Pod "security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061107418s
STEP: Saw pod success
Jul  5 09:09:28.093: INFO: Pod "security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916" satisfied condition "Succeeded or Failed"
Jul  5 09:09:28.123: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916 container test-container: <nil>
STEP: delete the pod
Jul  5 09:09:28.191: INFO: Waiting for pod security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916 to disappear
Jul  5 09:09:28.221: INFO: Pod security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:28.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7306" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":97,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:28.299: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 40 lines ...
Jul  5 09:09:23.356: INFO: PersistentVolumeClaim pvc-b8fvq found but phase is Pending instead of Bound.
Jul  5 09:09:25.387: INFO: PersistentVolumeClaim pvc-b8fvq found and phase=Bound (4.093621608s)
Jul  5 09:09:25.387: INFO: Waiting up to 3m0s for PersistentVolume local-th24t to have phase Bound
Jul  5 09:09:25.417: INFO: PersistentVolume local-th24t found and phase=Bound (29.701616ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-qkgb
STEP: Creating a pod to test exec-volume-test
Jul  5 09:09:25.510: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-qkgb" in namespace "volume-1981" to be "Succeeded or Failed"
Jul  5 09:09:25.540: INFO: Pod "exec-volume-test-preprovisionedpv-qkgb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.716826ms
Jul  5 09:09:27.572: INFO: Pod "exec-volume-test-preprovisionedpv-qkgb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061581432s
STEP: Saw pod success
Jul  5 09:09:27.572: INFO: Pod "exec-volume-test-preprovisionedpv-qkgb" satisfied condition "Succeeded or Failed"
Jul  5 09:09:27.602: INFO: Trying to get logs from node ip-172-20-52-221.us-east-2.compute.internal pod exec-volume-test-preprovisionedpv-qkgb container exec-container-preprovisionedpv-qkgb: <nil>
STEP: delete the pod
Jul  5 09:09:27.667: INFO: Waiting for pod exec-volume-test-preprovisionedpv-qkgb to disappear
Jul  5 09:09:27.697: INFO: Pod exec-volume-test-preprovisionedpv-qkgb no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-qkgb
Jul  5 09:09:27.697: INFO: Deleting pod "exec-volume-test-preprovisionedpv-qkgb" in namespace "volume-1981"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":31,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:28.460: INFO: Only supported for providers [azure] (not aws)
... skipping 41 lines ...
STEP: Destroying namespace "services-9597" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":11,"skipped":102,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:09:26.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  5 09:09:26.989: INFO: Waiting up to 5m0s for pod "downward-api-c89ad082-f386-46bb-8073-1eee494db558" in namespace "downward-api-5337" to be "Succeeded or Failed"
Jul  5 09:09:27.019: INFO: Pod "downward-api-c89ad082-f386-46bb-8073-1eee494db558": Phase="Pending", Reason="", readiness=false. Elapsed: 29.922832ms
Jul  5 09:09:29.051: INFO: Pod "downward-api-c89ad082-f386-46bb-8073-1eee494db558": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062269402s
STEP: Saw pod success
Jul  5 09:09:29.051: INFO: Pod "downward-api-c89ad082-f386-46bb-8073-1eee494db558" satisfied condition "Succeeded or Failed"
Jul  5 09:09:29.081: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod downward-api-c89ad082-f386-46bb-8073-1eee494db558 container dapi-container: <nil>
STEP: delete the pod
Jul  5 09:09:29.155: INFO: Waiting for pod downward-api-c89ad082-f386-46bb-8073-1eee494db558 to disappear
Jul  5 09:09:29.185: INFO: Pod downward-api-c89ad082-f386-46bb-8073-1eee494db558 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:29.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5337" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":78,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:29.276: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":42,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:08:32.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
• [SLOW TEST:60.274 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:32.535: INFO: Only supported for providers [gce gke] (not aws)
... skipping 98 lines ...
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0705 09:04:33.952637   12547 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0705 09:04:33.952698   12547 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  5 09:09:34.013: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.

Jul  5 09:09:34.014: INFO: Deleting pod "simpletest.rc-65clv" in namespace "gc-9003"
Jul  5 09:09:34.049: INFO: Deleting pod "simpletest.rc-862cp" in namespace "gc-9003"
Jul  5 09:09:34.083: INFO: Deleting pod "simpletest.rc-9mkn6" in namespace "gc-9003"
Jul  5 09:09:34.120: INFO: Deleting pod "simpletest.rc-jmbqn" in namespace "gc-9003"
Jul  5 09:09:34.156: INFO: Deleting pod "simpletest.rc-jq6gm" in namespace "gc-9003"
Jul  5 09:09:34.204: INFO: Deleting pod "simpletest.rc-k6prf" in namespace "gc-9003"
... skipping 10 lines ...
• [SLOW TEST:341.087 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":2,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:34.474: INFO: Driver local doesn't support ext4 -- skipping
... skipping 134 lines ...
I0705 09:07:10.891866   12571 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5011, replica count: 2
I0705 09:07:13.942806   12571 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 09:07:16.943733   12571 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 09:07:16.943: INFO: Creating new exec pod
Jul  5 09:07:26.043: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:07:31.517: INFO: rc: 1
Jul  5 09:07:31.517: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:32.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:07:37.955: INFO: rc: 1
Jul  5 09:07:37.955: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:38.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:07:43.955: INFO: rc: 1
Jul  5 09:07:43.955: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:44.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:07:49.963: INFO: rc: 1
Jul  5 09:07:49.963: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:50.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:07:55.944: INFO: rc: 1
Jul  5 09:07:55.944: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:07:56.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:01.953: INFO: rc: 1
Jul  5 09:08:01.953: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:02.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:07.953: INFO: rc: 1
Jul  5 09:08:07.953: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:08.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:13.967: INFO: rc: 1
Jul  5 09:08:13.967: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:14.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:19.980: INFO: rc: 1
Jul  5 09:08:19.980: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:20.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:25.958: INFO: rc: 1
Jul  5 09:08:25.958: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:26.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:31.931: INFO: rc: 1
Jul  5 09:08:31.931: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:32.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:37.998: INFO: rc: 1
Jul  5 09:08:37.998: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:38.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:43.942: INFO: rc: 1
Jul  5 09:08:43.943: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:44.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:49.977: INFO: rc: 1
Jul  5 09:08:49.977: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:50.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:08:55.950: INFO: rc: 1
Jul  5 09:08:55.951: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:08:56.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:01.946: INFO: rc: 1
Jul  5 09:09:01.947: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:02.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:08.028: INFO: rc: 1
Jul  5 09:09:08.028: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:08.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:13.942: INFO: rc: 1
Jul  5 09:09:13.942: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:14.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:19.949: INFO: rc: 1
Jul  5 09:09:19.949: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:20.518: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:25.950: INFO: rc: 1
Jul  5 09:09:25.950: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:26.517: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:31.942: INFO: rc: 1
Jul  5 09:09:31.942: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:31.942: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 09:09:37.380: INFO: rc: 1
Jul  5 09:09:37.380: INFO: Service reachability failing with error: error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5011 exec execpodwjw5j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 09:09:37.381: FAIL: Unexpected error:
    <*errors.errorString | 0xc001eaa5a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 257 lines ...
• Failure [148.606 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:09:37.381: Unexpected error:
      <*errors.errorString | 0xc001eaa5a0>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1330
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":5,"skipped":52,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:5.009 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":58,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:39.570: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:40.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8846" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":4,"skipped":74,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:40.151: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:07:45.578: INFO: >>> kubeConfig: /root/.kube/config
... skipping 137 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 09:09:40.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec" in namespace "projected-8296" to be "Succeeded or Failed"
Jul  5 09:09:40.388: INFO: Pod "downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec": Phase="Pending", Reason="", readiness=false. Elapsed: 29.981324ms
Jul  5 09:09:42.419: INFO: Pod "downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060726242s
STEP: Saw pod success
Jul  5 09:09:42.419: INFO: Pod "downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec" satisfied condition "Succeeded or Failed"
Jul  5 09:09:42.449: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec container client-container: <nil>
STEP: delete the pod
Jul  5 09:09:42.518: INFO: Waiting for pod downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec to disappear
Jul  5 09:09:42.549: INFO: Pod downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8296" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":82,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jul  5 09:09:37.816: INFO: PersistentVolumeClaim pvc-l9gsq found but phase is Pending instead of Bound.
Jul  5 09:09:39.848: INFO: PersistentVolumeClaim pvc-l9gsq found and phase=Bound (12.214934906s)
Jul  5 09:09:39.848: INFO: Waiting up to 3m0s for PersistentVolume local-szm4b to have phase Bound
Jul  5 09:09:39.878: INFO: PersistentVolume local-szm4b found and phase=Bound (30.589792ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wsnx
STEP: Creating a pod to test subpath
Jul  5 09:09:39.971: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wsnx" in namespace "provisioning-3951" to be "Succeeded or Failed"
Jul  5 09:09:40.002: INFO: Pod "pod-subpath-test-preprovisionedpv-wsnx": Phase="Pending", Reason="", readiness=false. Elapsed: 30.853813ms
Jul  5 09:09:42.033: INFO: Pod "pod-subpath-test-preprovisionedpv-wsnx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062132984s
Jul  5 09:09:44.064: INFO: Pod "pod-subpath-test-preprovisionedpv-wsnx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092844637s
Jul  5 09:09:46.096: INFO: Pod "pod-subpath-test-preprovisionedpv-wsnx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124299082s
STEP: Saw pod success
Jul  5 09:09:46.096: INFO: Pod "pod-subpath-test-preprovisionedpv-wsnx" satisfied condition "Succeeded or Failed"
Jul  5 09:09:46.126: INFO: Trying to get logs from node ip-172-20-57-184.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-wsnx container test-container-subpath-preprovisionedpv-wsnx: <nil>
STEP: delete the pod
Jul  5 09:09:46.195: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wsnx to disappear
Jul  5 09:09:46.225: INFO: Pod pod-subpath-test-preprovisionedpv-wsnx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wsnx
Jul  5 09:09:46.225: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wsnx" in namespace "provisioning-3951"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":16,"skipped":100,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:47.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1862" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":83,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:47.331: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 90 lines ...
Jul  5 09:09:38.021: INFO: PersistentVolumeClaim pvc-j7wbf found but phase is Pending instead of Bound.
Jul  5 09:09:40.051: INFO: PersistentVolumeClaim pvc-j7wbf found and phase=Bound (8.156143873s)
Jul  5 09:09:40.051: INFO: Waiting up to 3m0s for PersistentVolume local-hvhtq to have phase Bound
Jul  5 09:09:40.081: INFO: PersistentVolume local-hvhtq found and phase=Bound (29.610126ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h9vx
STEP: Creating a pod to test subpath
Jul  5 09:09:40.173: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h9vx" in namespace "provisioning-225" to be "Succeeded or Failed"
Jul  5 09:09:40.202: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx": Phase="Pending", Reason="", readiness=false. Elapsed: 29.627204ms
Jul  5 09:09:42.233: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059756576s
Jul  5 09:09:44.266: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09278992s
STEP: Saw pod success
Jul  5 09:09:44.266: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx" satisfied condition "Succeeded or Failed"
Jul  5 09:09:44.295: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-h9vx container test-container-subpath-preprovisionedpv-h9vx: <nil>
STEP: delete the pod
Jul  5 09:09:44.366: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h9vx to disappear
Jul  5 09:09:44.395: INFO: Pod pod-subpath-test-preprovisionedpv-h9vx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h9vx
Jul  5 09:09:44.396: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h9vx" in namespace "provisioning-225"
STEP: Creating pod pod-subpath-test-preprovisionedpv-h9vx
STEP: Creating a pod to test subpath
Jul  5 09:09:44.455: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h9vx" in namespace "provisioning-225" to be "Succeeded or Failed"
Jul  5 09:09:44.485: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx": Phase="Pending", Reason="", readiness=false. Elapsed: 29.594658ms
Jul  5 09:09:46.515: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059498588s
STEP: Saw pod success
Jul  5 09:09:46.515: INFO: Pod "pod-subpath-test-preprovisionedpv-h9vx" satisfied condition "Succeeded or Failed"
Jul  5 09:09:46.549: INFO: Trying to get logs from node ip-172-20-55-216.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-h9vx container test-container-subpath-preprovisionedpv-h9vx: <nil>
STEP: delete the pod
Jul  5 09:09:46.613: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h9vx to disappear
Jul  5 09:09:46.643: INFO: Pod pod-subpath-test-preprovisionedpv-h9vx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h9vx
Jul  5 09:09:46.643: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h9vx" in namespace "provisioning-225"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":34,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:47.929: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:241
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":14,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:09:42.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1355
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1372
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":7,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:48.138: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:48.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4567" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":8,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:48.602: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
Jul  5 09:09:39.081: INFO: PersistentVolumeClaim pvc-nngn6 found but phase is Pending instead of Bound.
Jul  5 09:09:41.112: INFO: PersistentVolumeClaim pvc-nngn6 found and phase=Bound (10.182680847s)
Jul  5 09:09:41.112: INFO: Waiting up to 3m0s for PersistentVolume local-j49kz to have phase Bound
Jul  5 09:09:41.142: INFO: PersistentVolume local-j49kz found and phase=Bound (29.707175ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8k4t
STEP: Creating a pod to test subpath
Jul  5 09:09:41.232: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8k4t" in namespace "provisioning-3090" to be "Succeeded or Failed"
Jul  5 09:09:41.262: INFO: Pod "pod-subpath-test-preprovisionedpv-8k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 30.592434ms
Jul  5 09:09:43.293: INFO: Pod "pod-subpath-test-preprovisionedpv-8k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061669837s
Jul  5 09:09:45.324: INFO: Pod "pod-subpath-test-preprovisionedpv-8k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092113675s
Jul  5 09:09:47.354: INFO: Pod "pod-subpath-test-preprovisionedpv-8k4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122035126s
STEP: Saw pod success
Jul  5 09:09:47.354: INFO: Pod "pod-subpath-test-preprovisionedpv-8k4t" satisfied condition "Succeeded or Failed"
Jul  5 09:09:47.383: INFO: Trying to get logs from node ip-172-20-52-221.us-east-2.compute.internal pod pod-subpath-test-preprovisionedpv-8k4t container test-container-volume-preprovisionedpv-8k4t: <nil>
STEP: delete the pod
Jul  5 09:09:47.458: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8k4t to disappear
Jul  5 09:09:47.487: INFO: Pod pod-subpath-test-preprovisionedpv-8k4t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8k4t
Jul  5 09:09:47.487: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8k4t" in namespace "provisioning-3090"
... skipping 26 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:48.631: INFO: Only supported for providers [vsphere] (not aws)
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:48.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-4943" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":17,"skipped":107,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-windows] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jul  5 09:09:48.882: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 40 lines ...
STEP: Destroying namespace "services-9124" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:53.585: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:53.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3618" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:09:53.964: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 164 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  5 09:09:54.298: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  5 09:09:54.329: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-dmv8
STEP: Creating a pod to test subpath
Jul  5 09:09:54.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-dmv8" in namespace "provisioning-2581" to be "Succeeded or Failed"
Jul  5 09:09:54.391: INFO: Pod "pod-subpath-test-inlinevolume-dmv8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.514715ms
Jul  5 09:09:56.421: INFO: Pod "pod-subpath-test-inlinevolume-dmv8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059446567s
Jul  5 09:09:58.463: INFO: Pod "pod-subpath-test-inlinevolume-dmv8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101621508s
STEP: Saw pod success
Jul  5 09:09:58.463: INFO: Pod "pod-subpath-test-inlinevolume-dmv8" satisfied condition "Succeeded or Failed"
Jul  5 09:09:58.493: INFO: Trying to get logs from node ip-172-20-38-136.us-east-2.compute.internal pod pod-subpath-test-inlinevolume-dmv8 container test-container-volume-inlinevolume-dmv8: <nil>
STEP: delete the pod
Jul  5 09:09:58.557: INFO: Waiting for pod pod-subpath-test-inlinevolume-dmv8 to disappear
Jul  5 09:09:58.590: INFO: Pod pod-subpath-test-inlinevolume-dmv8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-dmv8
Jul  5 09:09:58.590: INFO: Deleting pod "pod-subpath-test-inlinevolume-dmv8" in namespace "provisioning-2581"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:09:58.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2581" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":76,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 72371 lines ...
st-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:20.191036       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-8972/downwardapi-volume-64f086ff-7b98-4398-9237-142529efb27b\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:21.644151       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-6639/test-deployment-d4dfddfbf-4s4l6\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:22.263279       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-1-12\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:22.333858       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-2-11\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:23.744854       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-18/security-context-efc7a882-356b-4363-bedc-bf6fc97aac30\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:24.073456       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2139/hostexec-ip-172-20-38-136.us-east-2.compute.internal-rtr5f\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:24.132722       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-0-13\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:24.810512       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-8340/pod-configmaps-5d2ed8e0-1d3e-4100-b3dc-e020cb1d7da0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:26.917689       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-2-12\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:28.086381       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-2-13\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:28.543523       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4663-5248/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:28.919777       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-1-13\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:29.077866       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-5828/metadata-volume-21d4a0de-9334-4b3d-8573-692985f5c1df\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:30.767869       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4663/hostpath-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:31.415022       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3416/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:31.443819       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"pod-network-test-3416/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:31.475356       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3416/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:31.504254       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3416/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:31.809327       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5864/exceed-active-deadline--1-xchrn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:31.812194       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5864/exceed-active-deadline--1-vk672\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:33.697876       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"prestop-7864/server\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:34.242380       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4060/frontend-685fc574d5-9tgg2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:34.266670       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4060/frontend-685fc574d5-gt2t8\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:34.274961       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4060/frontend-685fc574d5-w4xvl\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:34.566746       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4060/agnhost-primary-5db8ddd565-bm5rh\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:34.903459       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4060/agnhost-replica-6bcf79b489-7f9fw\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:34.932117       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4060/agnhost-replica-6bcf79b489-fjnv8\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:35.329407       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-0-14\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:36.304130       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-8378-2502/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:36.335632       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-8378/inline-volume-tester-l562f\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:37.668546       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8432/pod-subpath-test-inlinevolume-vjkx\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:39.820236       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"prestop-7864/tester\" 
node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:40.840520       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2139/pod-subpath-test-preprovisionedpv-qv95\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:41.173886       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-2-14\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:41.273531       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-999/security-context-20c98a4d-0cc8-40a3-b6fc-52ec8f506124\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:42.202349       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9204/pod-submit-status-1-14\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:44.272021       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-3146/test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:44.697108       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8301-8020/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:44.754062       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8301-8020/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:47.274137       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-2042-9389/csi-hostpathplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:48.700215       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-6870/liveness-8af6382e-338a-4f2b-b2f1-c33f0a2cdb19\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:49.516275       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-2042/pod-a99c8c8f-2e2d-4cf3-8423-a1912106256f\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:51.040284       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-652/ss-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:51.581672       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4692/hostexec-ip-172-20-52-221.us-east-2.compute.internal-cr85x\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:52.571937       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-3146/test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:52.628184       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4663/hostpath-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:55.837399       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3416/test-container-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:55.869921       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3416/host-test-container-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:07:56.001542       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8301/pvc-volume-tester-t54b5\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:56.479667       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4692/local-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:07:56.821774       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-3146/test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:01.069920       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-3146/test-pod-bfcc8c83-1c16-4e7b-94d0-cb1c115f8f37\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:02.664648       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-2883-9342/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:05.212218       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-8654/security-context-e8c18a5f-548b-47ed-aa1f-4e9abeb37ddb\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:07.668034       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-test-1812/busybox-scheduling-079b0968-5e72-472b-84a9-e4f8650fe29e\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:14.151664       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4692/local-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:29.148097       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-7045/pod-06e03133-fe65-4c6e-9fdc-f7d665ab40e3\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:32.309318       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-807/httpd-deployment-8584777d8-pxpkm\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:32.401472       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-5831/test-webserver-ec445f3a-cb4e-4ba4-bc69-45c7c9f12a8f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:33.365023       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-8930/agnhost-primary-j996n\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:34.767891       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-6641/metadata-volume-c93fea5f-682d-4583-b8b3-f7721e12cb17\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:34.996401       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:35.054231       1 factory.go:382] \"Unable to 
schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:36.141338       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:36.142176       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:37.229880       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:37.266536       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-8375/test-rs-pxps6\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:37.612833       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-938/pod-projected-secrets-830067a5-477f-4d99-b04d-5443c593105b\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:39.117764       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:39.118264       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:39.118623       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:39.413564       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-8375/test-rs-hddpc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:39.483326       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-8375/test-rs-s96p6\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:39.499924       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-8375/test-rs-qg2tc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:39.878281       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-32/hostexec-ip-172-20-55-216.us-east-2.compute.internal-g5drn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:40.132900       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2149/hostexec-ip-172-20-55-216.us-east-2.compute.internal-jnfws\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 
09:08:41.552397       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-5511/exec-volume-test-preprovisionedpv-82dq\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:41.782572       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:42.353696       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:42.701911       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-1192/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:42.731926       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-1192/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:42.750292       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-32/pod-4317313d-b4ec-4648-a6c7-af8d04019213\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:42.763491       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-1192/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:42.795206       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-1192/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:43.565463       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:43.565901       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:44.676531       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:46.566591       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-7137/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0705 09:08:48.854962       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"init-container-2197/pod-init-bff57805-a59c-4878-a4c1-726eb9b4ee80\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:08:53.139834       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8304/hostexec-ip-172-20-55-216.us-east-2.compute.internal-d9vvn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:08:54.870448       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"provisioning-2149/pod-subpath-test-preprovisionedpv-9qgz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:00.098474       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"subpath-5590/pod-subpath-test-secret-q5fm\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:03.090006       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-1192/test-container-pod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:04.130463       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9176/hostexec-ip-172-20-57-184.us-east-2.compute.internal-hld88\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:06.681183       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9176/pod-1ed98e06-04c1-432c-9bd8-a2d6ca91d321\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:10.701871       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-9304/pod-projected-configmaps-9ea9350c-1283-4ca3-b25e-174a2993cf7b\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:10.763776       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8304/pod-subpath-test-preprovisionedpv-tzvl\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:14.322324       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8623-8658/csi-mockplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:16.690368       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3025/hostexec-ip-172-20-57-184.us-east-2.compute.internal-4nbsk\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:18.885207       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-1981/hostexec-ip-172-20-52-221.us-east-2.compute.internal-nqcrr\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:19.224639       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3025/pod-62e24930-7455-41bb-b0c5-b56297e92f06\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:21.917460       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3025/pod-e195bbab-4bc1-427a-b9f2-9081b8fd2af8\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:23.243166       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3951/hostexec-ip-172-20-57-184.us-east-2.compute.internal-tkl2h\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:25.111953       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8623/pvc-volume-tester-nkkkn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:25.493513       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-1981/exec-volume-test-preprovisionedpv-qkgb\" 
node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:26.016979       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-7306/security-context-13bc3780-4ebc-4890-930a-d6be2dfe4916\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:26.974578       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-5337/downward-api-c89ad082-f386-46bb-8073-1eee494db558\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:28.088973       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3090/hostexec-ip-172-20-52-221.us-east-2.compute.internal-mrnrz\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:28.684509       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-225/hostexec-ip-172-20-55-216.us-east-2.compute.internal-vqlng\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:30.270906       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9729-3238/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:30.319272       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9729-3238/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:32.838799       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6299/hostexec-ip-172-20-57-184.us-east-2.compute.internal-5vbvk\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:34.697362       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-6352/annotationupdate3eff6a59-aa36-4173-a5fd-b073740fe724\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:35.586327       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9729/pvc-volume-tester-kznxx\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:36.041883       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-43/externalip-test-gxsdw\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:36.069692       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-43/externalip-test-c6rq2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:39.117332       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-43/execpodtnngk\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:39.454717       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:39.961693       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3951/pod-subpath-test-preprovisionedpv-wsnx\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:40.157361       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-225/pod-subpath-test-preprovisionedpv-h9vx\" 
node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:40.342852       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-8296/downwardapi-volume-fa854207-2373-40f3-9058-a12d38b8c7ec\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:41.218840       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3090/pod-subpath-test-preprovisionedpv-8k4t\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:42.300052       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:42.808771       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-1862/test-rolling-update-controller-5tn46\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:42.986868       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-6541/busybox1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:43.983871       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:44.449504       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-225/pod-subpath-test-preprovisionedpv-h9vx\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:44.949783       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-1862/test-rolling-update-deployment-585b757574-zm5bw\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:47.607840       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8399/hostexec-ip-172-20-57-184.us-east-2.compute.internal-vwfqt\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:48.928213       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9124/hostexec\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:49.747086       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-8936/sample-webhook-deployment-78988fc6cd-prd7w\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:09:54.346716       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2581/pod-subpath-test-inlinevolume-dmv8\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:56.296032       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8399/pod-subpath-test-preprovisionedpv-jbm6\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:09:58.986744       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-758/hostexec-ip-172-20-57-184.us-east-2.compute.internal-s89hg\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:00.121351       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-1045/simple-27091270--1-k7qks\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:00.574228       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8399/pod-subpath-test-preprovisionedpv-jbm6\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:02.531710       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-9172/pod-secrets-520b49a4-18eb-4c95-9715-ed610ef33bb2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:03.461414       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-2964/pfpod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:04.983508       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-runtime-7316/termination-message-container7faff824-f907-4663-933c-cde536c54c05\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:09.077262       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7234/hostexec-ip-172-20-55-216.us-east-2.compute.internal-p4brz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:11.016531       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:11.218088       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:12.088699       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:12.121312       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-652/ss-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:14.636749       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:15.058944       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-8063/sample-webhook-deployment-78988fc6cd-5gbxl\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:15.154030       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-runtime-2724/image-pull-test2b142d8b-2f77-4d6f-b85a-fd41931c628e\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:23.837798       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-2878/liveness-25619f94-b171-4cc2-81cb-a27a08dffd35\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:24.062379       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:25.847329       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7234/pod-subpath-test-preprovisionedpv-xp5q\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:28.661111       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3348/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0705 09:10:28.691279       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3348/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:28.722724       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3348/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:28.754675       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3348/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:30.746599       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-1374/explicit-root-uid\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:31.005744       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:31.724393       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:32.152515       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:32.649550       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2595/hostexec-ip-172-20-38-136.us-east-2.compute.internal-sjhz4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:35.112921       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1313/hostexec-ip-172-20-52-221.us-east-2.compute.internal-b2f6n\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:37.633042       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1313/pod-bd8265b4-bdde-4ad2-aa6b-33f846b81150\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:40.285698       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1313/pod-fada1bf7-1f49-47b9-96b5-badfb4a1b092\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:41.277814       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2595/pod-subpath-test-preprovisionedpv-ktbt\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:41.997495       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-3439/pod-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:42.026005       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-3439/pod-1\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:42.072379       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-3439/pod-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:42.256080       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-652/ss-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 
09:10:44.226293       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:46.463908       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5856/all-succeed--1-9jd5d\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:46.464475       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5856/all-succeed--1-786ks\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:46.950356       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8943/pod-subpath-test-inlinevolume-bsmm\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:47.037659       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9124/hostexec-ip-172-20-38-136.us-east-2.compute.internal-ll6r6\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:10:48.820114       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5856/all-succeed--1-6z2bq\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:49.218181       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5856/all-succeed--1-f6jpv\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:51.094206       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3348/test-container-pod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:51.124305       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3348/host-test-container-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:53.220194       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4491/httpd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:10:56.490488       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9124/local-injector\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:01.023007       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:02.156539       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:02.802322       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-652/ss-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:05.598566       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4491/success\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:06.416813       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-9648/test-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:08.349161       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"init-container-6231/pod-init-7cbc248f-2553-46a5-b48c-774f19ac8341\" 
node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:08.536886       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-9648/test-host-network-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:14.038443       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:14.199831       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-6487/pause\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:14.500022       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-4316/condition-test-57vzm\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:14.513857       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-4316/condition-test-qvwsc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:15.989980       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-6224/pod-projected-configmaps-edee5b59-f380-4207-a938-5303828fb21a\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:16.025125       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"subpath-9534/pod-subpath-test-downwardapi-rw2l\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:16.033612       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9124/local-client\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:17.844097       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-7006/pod-configmaps-5c7d2c2d-20ba-4590-bdec-b7b487cd208e\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:18.481059       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3736/hostexec-ip-172-20-55-216.us-east-2.compute.internal-jzljs\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:22.154972       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-180/ss2-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:24.040190       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1059/ss2-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:25.108772       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7313/aws-injector\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:25.134498       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3736/pod-subpath-test-preprovisionedpv-49tk\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:31.020594       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-lw4cx\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:36.558154       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-4908/pod-47718bec-95c0-4287-9c11-04372a6cca14\" 
node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:36.781038       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-7519/busybox-110bd794-abd8-4006-82e2-033ec44e2771\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:38.880587       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-70/liveness-503e7a68-0e99-4d8d-a2f1-4eb90d86bed4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:40.081813       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"mounted-volume-expand-8012/deployment-4ca1267d-e645-4ee2-8008-3e63da9df117-7546c6dc57wcd2p\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:42.391583       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-8948/nfs-server\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:43.368003       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-6336/pod-projected-secrets-d11480ac-27e7-4e49-b804-51f66445626c\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:51.797908       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6067-1653/csi-hostpathplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:52.730900       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7313/aws-client\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:54.029553       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6067/hostpath-injector\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:55.040327       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-8948/pvc-tester-8gg4b\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:11:55.630536       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-4751-873/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:55.675165       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-4751/inline-volume-tester-9hbxj\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:11:59.899057       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-5051/pod-d67064cb-8d3f-4da8-b711-c105d606221d\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:00.147356       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-69/concurrent-27091272--1-pk7mb\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:01.703437       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-1072/pod-configmaps-f0a10b07-7485-45bf-9782-e0c988afc8ed\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:03.033290       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9699/pod-hostip-a4ad2350-d651-420b-aa37-c8db90b011b5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:05.245189       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"provisioning-8722/pod-subpath-test-dynamicpv-xjbr\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:05.437553       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-9232/pod-d922d69e-fde9-4625-8a26-956ed2431473\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:05.660124       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-1797/sample-webhook-deployment-78988fc6cd-72wng\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:06.448356       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"mounted-volume-expand-8012/deployment-4ca1267d-e645-4ee2-8008-3e63da9df117-7546c6dc57x49mk\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:09.404323       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-5051/pod-2b4196b5-8773-4067-b200-5cd142d07290\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:11.455384       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3658/hostexec-ip-172-20-38-136.us-east-2.compute.internal-jgkl8\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:12.806902       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-521/hostexec-ip-172-20-57-184.us-east-2.compute.internal-9fdpz\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:13.377132       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-452/deployment-shared-map-item-removal-55649fd747-kh66f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:13.389819       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-452/deployment-shared-map-item-removal-55649fd747-xczlw\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:13.390187       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-452/deployment-shared-map-item-removal-55649fd747-ltd2g\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:14.340930       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-2278/dns-test-662e8d28-57f3-4a37-9655-c05ffc0d4d2d\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:14.460626       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3658/pod-c85d2385-4e03-4506-9f5e-8869f6a8bccc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:15.499697       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-452/deployment-shared-map-item-removal-55649fd747-scqgl\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:15.965332       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2096/inline-volume-qv97q\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-qv97q-my-volume\\\" not found.\"\nI0705 09:12:17.610125       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-2096-8745/csi-hostpathplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" 
I0705 09:12:17.677544       1 factory.go:382] "Unable to schedule pod; no fit; waiting" pod="ephemeral-2096/inline-volume-tester-ttzvl" err="0/5 nodes are available: 5 persistentvolumeclaim \"inline-volume-tester-ttzvl-my-volume-0\" not found."
I0705 09:12:17.993230       1 volume_binding.go:316] "Failed to bind volumes for pod" pod="fsgroupchangepolicy-763/pod-edd11e8b-a0d9-4518-9a61-a8468b83662a" err="binding volumes: failed to check provisioning pvc: could not find v1.PersistentVolumeClaim \"fsgroupchangepolicy-763/aws5mglz\""
E0705 09:12:17.993260       1 framework.go:863] "Failed running PreBind plugin" err="binding volumes: failed to check provisioning pvc: could not find v1.PersistentVolumeClaim \"fsgroupchangepolicy-763/aws5mglz\"" plugin="VolumeBinding" pod="fsgroupchangepolicy-763/pod-edd11e8b-a0d9-4518-9a61-a8468b83662a"
E0705 09:12:17.993440       1 factory.go:398] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: failed to check provisioning pvc: could not find v1.PersistentVolumeClaim \"fsgroupchangepolicy-763/aws5mglz\"" pod="fsgroupchangepolicy-763/pod-edd11e8b-a0d9-4518-9a61-a8468b83662a"
I0705 09:12:18.341305       1 scheduler.go:662] "Successfully bound pod to node" pod="proxy-3789/agnhost" node="ip-172-20-38-136.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0705 09:12:18.640845       1 scheduler.go:662] "Successfully bound pod to node" pod="container-runtime-3784/termination-message-containerc10b0b53-7cad-4e33-8d4e-010873c12144" node="ip-172-20-52-221.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0705 09:12:18.704149       1 factory.go:382] "Unable to schedule pod; no fit; waiting" pod="ephemeral-2096/inline-volume-tester-ttzvl" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims."
I0705 09:12:19.618603       1 scheduler.go:662] "Successfully bound pod to node" pod="webhook-5868/sample-webhook-deployment-78988fc6cd-x6x48" node="ip-172-20-55-216.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0705 09:12:19.704618       1 factory.go:382] "Unable to schedule pod; no fit; waiting" pod="fsgroupchangepolicy-763/pod-edd11e8b-a0d9-4518-9a61-a8468b83662a" err="0/5 nodes are available: 5 persistentvolumeclaim \"aws5mglz\" not found."
E0705 09:12:19.706872       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-edd11e8b-a0d9-4518-9a61-a8468b83662a.168eda1e6ae74b38", GenerateName:"", Namespace:"fsgroupchangepolicy-763", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"fsgroupchangepolicy-763", Name:"pod-edd11e8b-a0d9-4518-9a61-a8468b83662a", UID:"0efc77de-8463-40cf-a59e-2bdaf65b0a13", APIVersion:"v1", ResourceVersion:"15845", FieldPath:""}, Reason:"FailedScheduling", Message:"0/5 nodes are available: 5 persistentvolumeclaim \"aws5mglz\" not found.", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc030d11cea018d38, ext:807712757441, loc:(*time.Location)(0x320d400)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc030d11cea018d38, ext:807712757441, loc:(*time.Location)(0x320d400)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-edd11e8b-a0d9-4518-9a61-a8468b83662a.168eda1e6ae74b38" is forbidden: unable to create new content in namespace fsgroupchangepolicy-763 because it is being terminated' (will not retry!)
I0705 09:12:20.247362       1 scheduler.go:662] "Successfully bound pod to node" pod="persistent-local-volumes-test-7444/hostexec-ip-172-20-55-216.us-east-2.compute.internal-t6dkn" node="ip-172-20-55-216.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0705 09:12:20.712611       1 scheduler.go:662] "Successfully bound pod to node" pod="ephemeral-2096/inline-volume-tester-ttzvl" node="ip-172-20-57-184.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0705 09:12:20.891713       1 scheduler.go:662] "Successfully bound pod to node" pod="provisioning-2905/pod-subpath-test-dynamicpv-5cgz" node="ip-172-20-55-216.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0705 09:12:21.706552       1 factory.go:382] "Unable to schedule pod; no fit; waiting" pod="fsgroupchangepolicy-763/pod-edd11e8b-a0d9-4518-9a61-a8468b83662a" err="0/5 nodes are available: 5 persistentvolumeclaim \"aws5mglz\" not found."
E0705 09:12:21.710568       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-edd11e8b-a0d9-4518-9a61-a8468b83662a.168eda1e6ae74b38", GenerateName:"", Namespace:"fsgroupchangepolicy-763", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"fsgroupchangepolicy-763", Name:"pod-edd11e8b-a0d9-4518-9a61-a8468b83662a", UID:"0efc77de-8463-40cf-a59e-2bdaf65b0a13", APIVersion:"v1", ResourceVersion:"15911", FieldPath:""}, Reason:"FailedScheduling", Message:"0/5 nodes are available: 5 persistentvolumeclaim \"aws5mglz\" not found.", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc030d11cea018d38, ext:807712757441, loc:(*time.Location)(0x320d400)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc030d11d6a1fd6e2, ext:809714742379, loc:(*time.Location)(0x320d400)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-edd11e8b-a0d9-4518-9a61-a8468b83662a.168eda1e6ae74b38" is forbidden: unable to create new content in namespace fsgroupchangepolicy-763 because it is being terminated' (will not retry!)
I0705 09:12:23.602851       1 scheduler.go:662] "Successfully bound pod to node" pod="persistent-local-volumes-test-7444/pod-780af95b-c3ff-4771-aedb-b960726ef8c3" node="ip-172-20-55-216.us-east-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
pod=\"persistent-local-volumes-test-7444/pod-780af95b-c3ff-4771-aedb-b960726ef8c3\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:23.722493       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6067/hostpath-client\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:25.516446       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-521/exec-volume-test-preprovisionedpv-5fcd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:28.418075       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-160/hostexec-ip-172-20-38-136.us-east-2.compute.internal-fhsg9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:29.890098       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2493/hostexec-ip-172-20-55-216.us-east-2.compute.internal-9b8sg\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:34.781095       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2493/pod-73ad7924-c417-4157-9db9-69134b4a3909\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:40.170698       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-1529-3626/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:41.412355       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-160/pod-subpath-test-preprovisionedpv-n82z\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:44.432209       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-1529/pod-1f2e1fb6-3040-4a86-86c0-c973887d2eec\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:46.891011       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5103/all-pods-removed--1-r4wbw\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:46.905667       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-5103/all-pods-removed--1-478bd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:54.604839       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"crd-webhook-1952/sample-crd-conversion-webhook-deployment-697cdbd8f4-nnxcn\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:12:55.821201       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1559/hostexec-ip-172-20-55-216.us-east-2.compute.internal-nshf8\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:12:56.606922       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svc-latency-2581/svc-latency-rc-f8qw9\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:00.145520       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-69/concurrent-27091273--1-v5bzx\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:02.143625       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"container-runtime-6889/image-pull-testbbec24cb-3ef3-46f7-aa2a-680a9ba9b03c\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:03.430113       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-8006/exec-volume-test-inlinevolume-zsrc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:03.628190       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8359/inline-volume-crmmv\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-crmmv-my-volume\\\" not found.\"\nI0705 09:13:04.600245       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-3059/hostexec-ip-172-20-57-184.us-east-2.compute.internal-xlk7j\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:05.993671       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-5771/pod-size-memory-volume-b91fa569-ff3c-4fd0-bc9c-65ac64d08080\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:06.688967       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-8909/downwardapi-volume-007b8502-08d6-4e7d-b1f6-feb341558969\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:06.781941       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-1529/pod-ef4cb251-0175-4e5c-9011-86dbfd4bb3df\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:07.295474       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-8359-4608/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:07.377674       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8359/inline-volume-tester-p4drp\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-p4drp-my-volume-0\\\" not found.\"\nI0705 09:13:09.270467       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-8108/startup-f7dc3017-5683-499a-8963-494888f55a6a\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:09.700217       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-7100/pod-configmaps-a8c2dc9d-37de-499c-ab3e-a46482e3e3dd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:09.814032       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-8359/inline-volume-tester-p4drp\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:09.892340       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"endpointslice-4915/pod1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:09.917569       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"endpointslice-4915/pod2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:10.620457       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1559/pod-subpath-test-preprovisionedpv-pjkq\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:11.526160       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"volume-3059/exec-volume-test-preprovisionedpv-j6kb\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:12.265389       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-3067/test-deployment-lpsbd-794dd694d8-xcv5w\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:15.719761       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-1165/hostexec-ip-172-20-55-216.us-east-2.compute.internal-k9swj\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:16.239648       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-1402/sample-webhook-deployment-78988fc6cd-ghsbh\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:17.704202       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2026/test-new-deployment-847dcfb7fb-96bgv\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:18.059229       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-1814/busybox-3d34cb39-65b6-4415-a326-79ccc4c9c175\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:19.922968       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2026/test-new-deployment-847dcfb7fb-dwr5l\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:19.989589       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2026/test-new-deployment-847dcfb7fb-c4zbk\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:20.005376       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2026/test-new-deployment-847dcfb7fb-nt25n\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:20.369470       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-7182/pod-ephm-test-projected-phv8\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:22.117699       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9327/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:13:24.489757       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9382/hostexec-ip-172-20-55-216.us-east-2.compute.internal-stkws\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:26.373936       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-1165/pod-c578b70a-1974-45c0-b36e-a00b27efab27\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:27.436673       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9382/pod-fc7635a2-4227-4742-bf31-ecbaed1ee0d5\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:27.887338       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-7662/pod-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:27.920732       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-7662/pod-1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:28.523161       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-1165/hostexec-ip-172-20-55-216.us-east-2.compute.internal-pgwzc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:28.880872       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-135/startup-636dbc65-9401-4c8a-958d-e4788fe435a7\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:30.512899       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-9202/pod-secrets-27e47aee-6b57-4778-aca0-f031493c07a6\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:33.618621       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-7255/pod-configmaps-1cb94113-20a5-4974-a744-fe105b9278ba\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:33.999986       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-5111/agnhost-primary-lzc4v\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:34.042155       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-652/ss-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:34.334214       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-5111/agnhost-primary-jfjj4\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:34.934100       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-5846/pfpod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:36.024362       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"containers-5510/client-containers-d75cced6-726b-41f9-95cf-834f204168a9\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:36.066237       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"subpath-8844/pod-subpath-test-projected-5nbw\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:38.432393       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9846/hostexec-ip-172-20-55-216.us-east-2.compute.internal-qxsp2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:40.827100       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-6696/pod-projected-configmaps-2ef0f8ce-4466-416c-95ac-314b9e0828f9\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:44.141337       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-5196/downwardapi-volume-d3f76e84-8988-4afd-8b59-b978526d855f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:47.503427       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4116/pod-subpath-test-inlinevolume-jg54\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:47.836922       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"configmap-6556/pod-configmaps-92dd61cc-66d2-47dd-8781-c3621fd97578\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:50.697626       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-5641/pfpod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:53.578632       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9511-4379/csi-mockplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:53.642482       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9511-4379/csi-mockplugin-attacher-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:55.175368       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9846/exec-volume-test-preprovisionedpv-vfcw\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:13:58.309150       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-2286/busybox-ae2e505d-6c2c-48a9-ba36-306440cd6778\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:58.360804       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-65/exec-volume-test-dynamicpv-5h2z\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:13:59.020170       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"aggregator-1821/sample-apiserver-deployment-64f6b9dc99-md5tn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:00.128348       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-2286/replace-27091274--1-kmxjf\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.516423       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-defaultsa\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.546537       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-mountsa\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.576683       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-nomountsa\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.607152       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-defaultsa-mountspec\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.644374       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-mountsa-mountspec\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.672470       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-nomountsa-mountspec\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.704031       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-defaultsa-nomountspec\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.734277       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-mountsa-nomountspec\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.767549       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-269/pod-service-account-nomountsa-nomountspec\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:01.851724       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9511/pvc-volume-tester-5pzq7\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:02.062595       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7913/hostpath-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:04.783073       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-1897/test-dns-nameservers\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:07.838210       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-8153/metadata-volume-9832c36e-46e8-4490-b707-8a8b06bb76fd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:10.322772       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-provisioning-2028/glusterdynamic-provisioner-7lnnd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:11.776693       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-8527/labelsupdate94e3dcfe-9cd1-4aa2-8123-4474beabf106\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:15.529891       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-4207/pod-e201383f-eec9-4754-a6a5-2b83e193d317\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:18.039945       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6357/hostexec-ip-172-20-38-136.us-east-2.compute.internal-7bjcf\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:18.862243       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-2020/pod-6e5731f3-90b8-4c44-a2de-6d265326164e\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:20.005520       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-6029/sample-webhook-deployment-78988fc6cd-txtkv\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:20.619758       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6357/pod-fe93fa54-2430-4680-9d4e-5c34a450fdc4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:20.852061       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-662/pod-82fc2519-c27e-4acb-b747-604386a5eed3\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:21.418716       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:21.632929    
   1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7137/hostexec-ip-172-20-38-136.us-east-2.compute.internal-pck28\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:21.684037       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7913/hostpath-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:22.819803       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-1\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:23.907422       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3635/httpd-deployment-948b4c64c-qrllr\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:23.923304       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3635/httpd-deployment-948b4c64c-8z295\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:24.093670       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4508/hostexec-ip-172-20-38-136.us-east-2.compute.internal-rr92k\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:24.711469       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3635/httpd-deployment-948b4c64c-pnq4k\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:25.054060       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3635/httpd-deployment-8584777d8-4llxz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:25.532613       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-8340/pod-2642db15-9f7e-41a3-94d9-a3af13fd0324\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:25.600362       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:26.317966       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7137/pod-subpath-test-preprovisionedpv-4c8j\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:28.144354       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9524/hostexec-ip-172-20-55-216.us-east-2.compute.internal-xhv99\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:30.468044       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7871/hostexec-ip-172-20-57-184.us-east-2.compute.internal-j8jvp\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:31.416835       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-8889/hostexec-ip-172-20-57-184.us-east-2.compute.internal-s2r6j\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:31.950107       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.041936       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-kr4b2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.066540       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-phjmt\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.086674       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-nw77k\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.162608       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-lxczq\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.177474       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-c2pl5\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.177562       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-mzkph\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.177648       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-p9rhc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.304955       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-pmmqs\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.321981       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-4n9sg\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.328793       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-cq6xj\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.328940       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-ct8nf\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.329045       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-6jwk9\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.329137       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-dd4pz\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.368431       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-rqrx9\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.368578       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-cbkpw\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.368677       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-nl6lr\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.368808       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-wm5vr\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.368905       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-2mxgj\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.369018       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-9fhsz\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.369111       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-wkgjv\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.369250       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-t5g67\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.369344       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-k7tt4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.395594       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-g85bq\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.436153       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-ln42t\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.436644       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-rmxdx\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.436756       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-rwrjz\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.436862       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-8w6j4\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.436969       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-2zc5q\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.437058       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-6kktc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.437141       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-7f955\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.437226       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-64l7z\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.460949       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-ztm8n\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.469313       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-vck2h\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.508185       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-zccwl\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.535613       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-lknhr\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.560742       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-xbg4n\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.636175       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-c6s4k\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.664766       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-t7wnk\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.699653       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-dbx89\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:32.756512       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-8097/cleanup40-8bf48fb9-1d2e-4746-be4d-900530a3f7ab-mjr8x\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:36.328516       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-7056/pod-ac79260f-7d7e-4281-aaac-85eac9c6b64c\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:37.041193       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-319/pod-271da21d-be23-4749-bb4b-fcde63b2c4db\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:40.543194       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-6187/pod-07c3b50b-2a88-4ee0-826b-fc917c868a80\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:40.953230       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9524/pod-subpath-test-preprovisionedpv-4l6x\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 
09:14:41.023376       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4508/local-injector\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:41.276377       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7871/pod-subpath-test-preprovisionedpv-4r4m\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:44.941253       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-721/hostexec-ip-172-20-55-216.us-east-2.compute.internal-r45xp\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:46.708213       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-6187/hostexec-ip-172-20-38-136.us-east-2.compute.internal-hn47h\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:46.924790       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-1\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:47.196354       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-319/pvc-volume-tester-writer-ddctn\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:49.703626       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:51.895694       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-721/pod-c4e3cbc4-67b2-41b3-a04b-13f80246c857\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:52.089174       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pvc-protection-543/pvc-tester-t8qdr\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:52.697282       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:52.742342       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4508/local-client\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:53.723147       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-1\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:14:54.621288       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-721/pod-392d0b64-88cd-4e99-8a7f-f24039eb1505\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:55.483157       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-8019/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:55.523828       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-8019/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:55.553212       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-8019/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:55.589916       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-8019/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:14:56.242421       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-8889/pod-459cba31-b579-48e3-a36a-a43281f215ba\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:00.155043       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-2286/replace-27091275--1-c7m7m\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:00.763118       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-5958/pod-logs-websocket-8ee4bff6-7a31-4683-94c8-d1340d78a7f6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:02.273617       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-8163-8299/csi-hostpathplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:02.616464       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-8495/ss2-2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:04.019909       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9197-5945/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:04.403738       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-8889/hostexec-ip-172-20-57-184.us-east-2.compute.internal-hfntg\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:06.364172       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-4631/pod-projected-secrets-d430326c-c1b1-4955-af33-26bba027ea66\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:08.584374       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-8163/pod-8f7dc060-c9e5-48bf-ad81-9da4c8f7c9c7\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:08.590760       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-5647/hostexec-ip-172-20-38-136.us-east-2.compute.internal-44cs5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:09.374960       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1949/hostexec-ip-172-20-38-136.us-east-2.compute.internal-54xtt\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:11.067481       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4566-7269/csi-mockplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:11.100024       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4566-7269/csi-mockplugin-attacher-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:11.131533       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4566-7269/csi-mockplugin-resizer-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:12.150642       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"configmap-1572/pod-configmaps-23591814-5555-4d58-98f2-483d31453ef3\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:12.764011       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-2101/busybox-privileged-true-8e92f0c7-5b9a-4f2f-886c-d075d22ad280\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:13.942676       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1949/pod-4a0acf71-21fd-4ef0-9b2d-723c5b3277c9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:14.578860       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"clientset-3697/podefd39125-4a3a-4639-a718-f2792d9ff28b\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:15.215906       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4458/hostexec-ip-172-20-52-221.us-east-2.compute.internal-kps5x\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:17.278815       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-7207/labelsupdatedbf46047-685c-4878-8131-b5243ee071dc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:17.444923       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1413/hostexec-ip-172-20-55-216.us-east-2.compute.internal-p6hzn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:17.900710       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-8019/test-container-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:19.738202       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-9061/pod-handle-http-request\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:20.389038       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-9773/simpletest.rc-pwxnt\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:20.418400       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-9773/simpletest.rc-llspw\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:21.865288       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-9061/pod-with-prestop-exec-hook\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:21.975591       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1413/pod-ae7541e7-e75c-46df-b23c-f0ba2eb28030\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:22.359388       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-6322/kube-proxy-mode-detector\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:22.513949       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9197/pod-subpath-test-dynamicpv-pzds\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:22.949044       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"volume-expand-8163/pod-f357d63f-274a-48b4-9ed9-04a811ce7556\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:24.441479       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-811/externalname-service-fflnc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:24.471949       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-811/externalname-service-m9cn9\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:25.104280       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-6322/echo-sourceip\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:25.867492       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4458/pod-dd9a3f28-6592-4638-95d4-99775fd6f38c\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:26.246357       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-5647/exec-volume-test-preprovisionedpv-f5xl\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:27.736710       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4566/pvc-volume-tester-jrgxh\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:27.912500       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-7167/pod-adoption-release\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:28.021139       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4458/hostexec-ip-172-20-52-221.us-east-2.compute.internal-5lbrw\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:29.332714       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-6322/pause-pod-67964f89c8-xl8bb\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:29.350647       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-6322/pause-pod-67964f89c8-k55wz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI0705 09:15:30.485820       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-811/execpodkk2vt\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:32.413120       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4056/inline-volume-g5vtt\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-g5vtt-my-volume\\\" not found.\"\nI0705 09:15:34.082898       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2093/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:34.112604       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2093/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:34.143424       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2093/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:34.173177       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"nettest-2093/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:36.026442       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-4056-7864/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:36.103375       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4056/inline-volume-tester-qlckb\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-qlckb-my-volume-0\\\" not found.\"\nI0705 09:15:36.165867       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-7167/pod-adoption-release-fmlpr\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:39.873334       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-4056/inline-volume-tester-qlckb\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:41.382447       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-5987/hostexec-ip-172-20-55-216.us-east-2.compute.internal-vlc7n\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:42.858207       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5420-9219/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:42.917004       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5420-9219/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:44.698691       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9393/aws-injector\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:47.601950       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"mount-propagation-6520/hostexec-ip-172-20-55-216.us-east-2.compute.internal-965j6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:50.240418       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5420/pvc-volume-tester-z4fnn\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:54.253785       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-5442/pod-configmaps-3ea3c5d0-ad40-4dba-9aae-075dfde4cc51\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:54.466463       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2093/test-container-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:15:56.149107       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-5987/pod-002c4f12-ab9f-465f-ac5c-da4198ef1619\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:56.924252       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3186/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:56.949809       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3186/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 
09:15:56.978778       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3186/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:57.009630       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3186/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:57.872284       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4565-4499/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:15:58.315257       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-5987/hostexec-ip-172-20-55-216.us-east-2.compute.internal-q7tmf\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:00.130616       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-9922/concurrent-27091276--1-7cldk\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:00.998902       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2428/pod-subpath-test-inlinevolume-826c\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:03.218623       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4565/pvc-volume-tester-w44xd\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:07.606975       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-6437/security-context-ece23e58-e389-4bb0-b6d3-ff74aee7318a\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:11.156443       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-runtime-89/termination-message-container286c1337-a941-429a-89de-1f12a13abd84\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:13.591324       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"containers-9370/client-containers-16c6626a-05b5-439b-b4c1-3b80fbcc9cab\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:13.955909       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-3467/downwardapi-volume-32855432-29a3-4acd-abf7-f2d93f178cf1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:14.055143       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7579/hostexec-ip-172-20-57-184.us-east-2.compute.internal-txlfb\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:16.600881       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7579/pod-b4930f2c-6284-428c-8d79-386888dd8c32\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:17.188007       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2844-599/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:17.219685       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2844-599/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0705 09:16:17.301170       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pod-network-test-3186/test-container-pod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:18.241530       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-1144/pod-b1cb11bd-9d83-4fc0-acdb-f10b03c4e4fc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:18.468408       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-4949/alpine-nnp-false-cbc2bf19-05b5-415f-8d74-bba0459150a9\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:18.747569       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4840/pod-subpath-test-inlinevolume-mq97\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:20.768906       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-6033/deployment-shared-unset-55bfccbb6c-mtsc7\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:20.785530       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-6033/deployment-shared-unset-55bfccbb6c-sdfp7\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:20.790828       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-6033/deployment-shared-unset-55bfccbb6c-sb5tg\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:21.278821       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6633/hostexec-ip-172-20-55-216.us-east-2.compute.internal-kq5hz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:22.288812       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-716-6236/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:23.370775       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-4823/downwardapi-volume-21fcc943-5e2f-46fb-ad3a-e2f1fe7b1ed1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:24.543069       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2844/pvc-volume-tester-vs8h6\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:24.552237       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-716/pod-subpath-test-dynamicpv-9rsb\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:25.903479       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6633/local-injector\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:25.919958       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4133/hostexec-ip-172-20-38-136.us-east-2.compute.internal-xdfl4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:26.471575       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-6989/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) 
didn't match Pod's node affinity/selector.\"\nI0705 09:16:28.469088       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4133/pod-ff43b062-86a7-41d0-a866-1f04c70e56c1\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:16:29.751252       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-194/explicit-nonroot-uid\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:29.902744       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4133/pod-ff43b062-86a7-41d0-a866-1f04c70e56c1\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-cdk2z\\\" not found.\"\nE0705 09:16:29.905957       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-ff43b062-86a7-41d0-a866-1f04c70e56c1.168eda58abde9b15\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-4133\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-4133\", Name:\"pod-ff43b062-86a7-41d0-a866-1f04c70e56c1\", UID:\"77af6ec2-315c-4d81-9147-19b95bfdc568\", APIVersion:\"v1\", ResourceVersion:\"24966\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-cdk2z\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc030d15b75cf9915, ext:1057910810278, loc:(*time.Location)(0x320d400)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc030d15b75cf9915, ext:1057910810278, loc:(*time.Location)(0x320d400)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-ff43b062-86a7-41d0-a866-1f04c70e56c1.168eda58abde9b15\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4133 because it is being terminated' (will not retry!)\nI0705 09:16:31.903828       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4133/pod-ff43b062-86a7-41d0-a866-1f04c70e56c1\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-cdk2z\\\" not found.\"\nE0705 09:16:31.908453       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-ff43b062-86a7-41d0-a866-1f04c70e56c1.168eda58abde9b15\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-4133\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-4133\", Name:\"pod-ff43b062-86a7-41d0-a866-1f04c70e56c1\", UID:\"77af6ec2-315c-4d81-9147-19b95bfdc568\", APIVersion:\"v1\", ResourceVersion:\"25004\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-cdk2z\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc030d15b75cf9915, ext:1057910810278, loc:(*time.Location)(0x320d400)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc030d15bf5e282a8, ext:1059912049721, loc:(*time.Location)(0x320d400)}}, Count:2, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-ff43b062-86a7-41d0-a866-1f04c70e56c1.168eda58abde9b15\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4133 because it is being terminated' (will not retry!)\nI0705 09:16:32.149040       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-5204/pod-projected-configmaps-de3d3e8e-71c7-4d53-9823-c8fd370da97b\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:32.635326       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-6989/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:16:34.061194       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2203/test-new-deployment-847dcfb7fb-hr5t6\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:36.859934       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7211/pod-subpath-test-inlinevolume-qdnx\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:36.925309       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-1684/downwardapi-volume-eef92e20-b225-48d9-a76d-cda48bf05fae\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:39.605924       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-4751/sample-webhook-deployment-78988fc6cd-phwwk\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:40.772257       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9787-1019/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:40.950099       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-9919/pod-c0c1c732-7652-4c7d-a134-bcc37c477e73\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:41.100520       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6268/ss-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:43.093125       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"volumemode-9919/hostexec-ip-172-20-52-221.us-east-2.compute.internal-ppktd\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:43.462228       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6633/local-client\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:43.996508       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-673/httpd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:46.935191       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7852/hostexec-ip-172-20-52-221.us-east-2.compute.internal-sm6xk\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:47.039774       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9787/pod-subpath-test-dynamicpv-vrvp\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:53.134183       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-6870/nfs-server\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:54.342810       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-673/run-log-test\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:16:55.523402       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7852/local-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:16:56.817917       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3169/hostexec-ip-172-20-52-221.us-east-2.compute.internal-r7mw5\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:08.243602       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7447/hostexec-ip-172-20-55-216.us-east-2.compute.internal-m96qz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:09.847503       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-6870/pvc-tester-ntdbh\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:11.073581       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7852/local-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:11.380693       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9540/server-envvars-d40dbcd5-975e-4274-a157-76bf62fc5853\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:11.539357       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3169/pod-subpath-test-preprovisionedpv-6f72\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:11.764069       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-3395/sample-webhook-deployment-78988fc6cd-w5g2j\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:13.543237       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9540/client-envvars-02721a28-6859-432f-8060-59f9489f3505\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0705 09:17:15.093739       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-6041/httpd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:15.644330       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7047/inline-volume-h8qcn\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-h8qcn-my-volume\\\" not found.\"\nI0705 09:17:17.190382       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-7047-2316/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:17.257888       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7047/inline-volume-tester-2c6kp\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-2c6kp-my-volume-0\\\" not found.\"\nI0705 09:17:18.574220       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3475/pod-subpath-test-inlinevolume-v4g8\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:18.933595       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7047/inline-volume-tester-2c6kp\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:17:20.732898       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-8574/pod-handle-http-request\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:20.939373       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-7047/inline-volume-tester-2c6kp\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:22.859025       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-8574/pod-with-poststart-exec-hook\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:23.142696       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-8433/liveness-df3a6980-88a4-4f9a-9f5a-e217ccfd3ec3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:24.504098       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9448/adopt-release--1-xk96v\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:24.513035       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9448/adopt-release--1-5488c\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:25.810727       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4478/hostexec-ip-172-20-38-136.us-east-2.compute.internal-z2vhc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:26.530231       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-1508/pod-projected-configmaps-5422ea1e-22ed-4c22-986c-b2f9097f3f9b\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:27.715460       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9448/adopt-release--1-7fvqn\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:28.028698       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"provisioning-731/hostexec-ip-172-20-38-136.us-east-2.compute.internal-pk6fg\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:29.527781       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2238/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:29.557885       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2238/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:29.590820       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2238/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:29.621865       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2238/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:30.480776       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-1022-8521/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:33.211432       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-1188/pod-058c2d13-5d67-4ca9-8bfd-b637a0f560f0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:34.741063       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-1022/pod-01e1dc76-98fb-47cd-8d3b-6ea817ad3aa6\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:35.833804       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1398/ss-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:37.316021       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2134/hostexec-ip-172-20-57-184.us-east-2.compute.internal-75nzt\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:41.362139       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4478/pod-subpath-test-preprovisionedpv-xtnh\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:41.506932       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-731/pod-subpath-test-preprovisionedpv-tc54\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:46.007900       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-1398/ss-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:47.306708       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9924/hostexec-ip-172-20-55-216.us-east-2.compute.internal-rdv2r\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:47.427454       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-3561/downward-api-9c2d49b0-3ec4-4bce-b1d2-5b7a290f1317\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:49.607144       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-552/hostexec-ip-172-20-57-184.us-east-2.compute.internal-wnc92\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:49.881160       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-6596/downwardapi-volume-d65c1228-33fd-41dd-9005-f2d7ae61db92\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:49.935678       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-2238/test-container-pod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.141801       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-552/pod-0115029f-c2b8-474b-8cae-6b4fdfc6f8d4\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:52.414961       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-3048/pod-configmaps-09086d1e-32de-4707-8dcc-47febe9bfa16\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.799718       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-xtqw7\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.803463       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-dgh4f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.817930       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-dt8xj\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.821568       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-7xbxg\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.835826       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-w8xl6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.836044       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-6bcxp\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.841306       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-5wwkc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.874113       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-l4l9d\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.879820       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-s8mgw\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:52.893030       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-bnlfj\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:53.269350       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-2511/implicit-nonroot-uid\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:53.821178       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5954/pod-subpath-test-dynamicpv-z9m4\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:54.904443       1 scheduler.go:662] \"Successfully bound pod to 
node\" pod=\"persistent-local-volumes-test-552/pod-df012ff1-c460-4fd3-bf50-450a1a9142ea\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:54.920946       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-7477/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:54.948913       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-7477/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:54.977573       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-7477/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:55.011005       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-7477/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:55.920438       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9924/pod-subpath-test-preprovisionedpv-5sjd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:56.096597       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2134/pod-subpath-test-preprovisionedpv-pz5d\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:17:56.472538       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9965/aws-injector\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:56.819028       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6268/ss-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:59.054243       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4924/rs-hfb7r\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:17:59.373423       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9807/hostexec-ip-172-20-52-221.us-east-2.compute.internal-vlbn9\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:01.953254       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9807/pod-f8bc20d1-71d6-4873-8c46-85e4f90ef491\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:02.080552       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5750/hostexec-ip-172-20-57-184.us-east-2.compute.internal-zbccq\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:05.200690       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4462/hostexec-ip-172-20-55-216.us-east-2.compute.internal-472n8\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:05.461077       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-3154/pod-2c1e4ceb-54df-489f-8e02-0a905b4a4224\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:05.578213       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4759/hostexec-ip-172-20-38-136.us-east-2.compute.internal-77pxf\" 
node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:06.631080       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5750/pod-4c3ac7b2-4eb9-432e-8ea4-3e05d31d1812\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:09.936498       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-7212/dns-test-cddb7756-6ed5-4245-a5d8-69d10dc980b4\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:12.333021       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9393/aws-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:13.326303       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5750/pod-3eb55736-e3da-442e-97d4-c75f556b76fa\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:16.995254       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-2078/agnhost-primary-767vh\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:21.377260       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-7477/test-container-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:22.354453       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-3649/test-webserver-21f0ebce-a6c4-40aa-bff9-44ae9c85c480\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:23.259123       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-9733/hostexec-ip-172-20-52-221.us-east-2.compute.internal-85ddw\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:23.487467       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-5209/pod-handle-http-request\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:24.295044       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-9257/pod-projected-secrets-2355a619-2dca-4a4d-af7a-1f01f0e7f147\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:25.620465       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-5209/pod-with-prestop-http-hook\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:26.340937       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4462/pod-8af54932-b38d-4e01-8971-a5746bf0c2b3\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:26.742434       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4759/pod-subpath-test-preprovisionedpv-nww2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:26.793326       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-1998/hostexec-ip-172-20-55-216.us-east-2.compute.internal-7m7g5\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:28.491995       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"volumemode-4462/hostexec-ip-172-20-55-216.us-east-2.compute.internal-m5vqq\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:32.210795       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-9515/downward-api-992b5cfc-910a-4712-b52e-86b6673a8059\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:34.702783       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"var-expansion-6350/var-expansion-51a673d2-e370-464f-b3c0-611f8cc175b1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:37.224914       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-8019/configmap-client\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:40.357932       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-9733/pod-e2f57df8-8185-40ce-8a9b-0188a9b951ff\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:41.640173       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-1998/pod-d7e3352a-a97e-4862-927d-734375a77d4f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:42.511287       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-9733/hostexec-ip-172-20-52-221.us-east-2.compute.internal-27p4m\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:43.781706       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-1998/hostexec-ip-172-20-55-216.us-east-2.compute.internal-zt9b6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:44.276221       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-5984/pod-projected-configmaps-2a539922-1715-4e84-abf8-5bf654531b7b\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:45.125656       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1015/hostexec-ip-172-20-55-216.us-east-2.compute.internal-blsjr\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:50.196629       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7284-3736/csi-mockplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:51.041213       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"events-5829/send-events-9ca0a785-a4c1-4774-ace3-73374a3f8d7c\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:51.511548       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-3958/dns-test-aafde6ad-2382-4f7f-ad29-f666ba1fb61e\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:52.755747       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-6793/downward-api-12fc3fe4-672c-4f61-86f7-3df509539693\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:53.383418       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9308/pod-subpath-test-inlinevolume-dn5k\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:53.613062      
 1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-877/hostexec-ip-172-20-38-136.us-east-2.compute.internal-t8x9g\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:55.661952       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-8967/exec-volume-test-inlinevolume-mhlr\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:18:56.457072       1 volume_binding.go:316] \"Failed to bind volumes for pod\" pod=\"csi-mock-volumes-7284/pvc-volume-tester-hffd5\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-lklll\\\"\"\nE0705 09:18:56.457122       1 framework.go:863] \"Failed running PreBind plugin\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-lklll\\\"\" plugin=\"VolumeBinding\" pod=\"csi-mock-volumes-7284/pvc-volume-tester-hffd5\"\nE0705 09:18:56.457204       1 factory.go:398] \"Error scheduling pod; retrying\" err=\"running PreBind plugin \\\"VolumeBinding\\\": binding volumes: provisioning failed for PVC \\\"pvc-lklll\\\"\" pod=\"csi-mock-volumes-7284/pvc-volume-tester-hffd5\"\nI0705 09:18:56.600101       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1015/pod-subpath-test-preprovisionedpv-ljgm\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:57.552741       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8971/hostexec-ip-172-20-55-216.us-east-2.compute.internal-fdfgv\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:57.960927       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-6634/hostexec-ip-172-20-52-221.us-east-2.compute.internal-nvhx5\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:18:59.004081       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7284/pvc-volume-tester-hffd5\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:01.246424       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-319/pvc-volume-tester-reader-bx7sf\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:10.403734       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-877/pod-subpath-test-preprovisionedpv-7rc9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:11.092305       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8971/pod-subpath-test-preprovisionedpv-9hxc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:11.467137       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-6634/pod-faf975af-92b9-4231-82cd-f1a8fbcd2e5c\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:13.613652       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-6634/hostexec-ip-172-20-52-221.us-east-2.compute.internal-9v69z\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:14.769587       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-test-5036/bin-false67bce24c-53d7-49c6-addc-f11d761e8234\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 
09:19:16.708983       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-2302/pod-70e82653-3d0d-496d-9be8-954d345106ed\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:16.902566       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-8380/pod-6788829f-b3e9-4a8d-bf41-b879a8a5b85b\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:19.157320       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-66/pod-subpath-test-inlinevolume-n94m\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:19.387186       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-4382/nfs-server\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:23.276281       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-9656/pod-1e573477-ca72-4490-8e8d-f9e9ebea2286\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:23.703101       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4955/hostexec-ip-172-20-52-221.us-east-2.compute.internal-9tztc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:24.170385       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1242/hostexec-ip-172-20-38-136.us-east-2.compute.internal-vrs8w\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:26.180345       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-4382/pvc-tester-zklc5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:27.054911       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1242/pod-3c7d401d-075f-4879-9df7-4a32b5257341\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:29.713875       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1242/pod-a6cb2b59-e8a9-4d92-8307-f80a655fdf5b\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:29.782116       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"var-expansion-1022/var-expansion-2614eb86-2e74-4527-8059-5ba1a9260d8b\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:33.831602       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-9404/pod-5455eaa7-1e67-41c9-b4c7-09c3245bae6c\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:33.910902       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2717/pod-subpath-test-dynamicpv-6z6d\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:34.361045       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-275/hostexec-ip-172-20-57-184.us-east-2.compute.internal-qqbnn\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:36.581987       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-2426/busybox-readonly-false-caf66ea8-ea46-4ed1-b66e-84f43f27c5b0\" 
node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:37.156608       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4776-986/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:37.208821       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4776-986/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:38.816059       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-9742/my-hostname-basic-aa9693bc-9352-40d2-a6a8-50eedd36f2e1-tl2hf\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:39.078630       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-3860/pod-84a3b145-4577-4693-a01a-2d778f193da3\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:39.129840       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-6301/test-rollover-controller-prp54\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:39.556080       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-4516/sample-webhook-deployment-78988fc6cd-5w878\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:40.754348       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4955/pod-subpath-test-preprovisionedpv-9rq5\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:40.980233       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-275/pod-subpath-test-preprovisionedpv-vpkt\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:42.499580       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4776/pvc-volume-tester-hsrn6\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:43.309639       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-6301/test-rollover-deployment-78bc8b888c-7l7x5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:43.784924       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-6301/test-rollover-deployment-98c5f4599-mh7z5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:46.482338       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-1118/busybox-ec3fb7e3-1875-46c7-a009-fd3a5b8c01e0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:47.304015       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-2182-8958/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:47.346670       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-2182/inline-volume-tester-prwj7\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:19:51.872950       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-5700/downwardapi-volume-2f96eeb8-d88e-4f5d-ab96-9c9af1226596\" 
node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:52.460309       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-9404/pod-229fb573-09e8-44fe-8b79-ccbfe7bd5f5c\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:19:54.406799       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"subpath-2230/pod-subpath-test-configmap-zjks\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:04.704355       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7097-5231/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:04.847725       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-7097/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:20:06.040682       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-7097/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:20:07.424009       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-4509/pod-configmaps-cc800d94-e320-4ca2-bc3a-fab9bcc998ca\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:08.047432       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7097/hostpath-injector\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:17.633907       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-4871/foo--1-c7x84\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:17.647099       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-4871/foo--1-wkj6k\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:24.517632       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-7097/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:20:26.056879       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7097/hostpath-client\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:30.872248       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-3867/pod-handle-http-request\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:32.993904       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-3867/pod-with-poststart-http-hook\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:34.725068       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"hostpath-9593/pod-host-path-test\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:36.943033       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-7757/busybox-user-0-83a97314-23e2-4368-818b-99c955272139\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:37.773348       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4986/pod1\" 
node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:38.298836       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1691-7252/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:38.357811       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1691-7252/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:39.319680       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-3695/test-pod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:39.888296       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"tables-4885/pod-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:40.000058       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4986/pod2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:42.259520       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4986/execpodh4nxf\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:43.636857       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1691/pvc-volume-tester-hrk6j\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:47.223726       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-1045/pod-secrets-7f8dcec1-2ad1-4c0f-97ea-69559bb63367\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:47.843157       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:20:49.716041       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4855/hostexec-ip-172-20-38-136.us-east-2.compute.internal-lnw7c\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:20:56.311645       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4855/pod-subpath-test-preprovisionedpv-8w4t\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:01.308758       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-2861/pod-projected-configmaps-53551303-d574-49ef-9571-b26e3b11fb38\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:01.729057       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1691/inline-volume-fsb65\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:03.108008       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-833/hostpathsymlink-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:14.431535       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-8006/pod-submit-remove-c7e6007f-1efa-4ba4-8bc8-d76f6f9667c9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:22.800077       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"volume-833/hostpathsymlink-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:27.780785       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-9429/update-demo-nautilus-cvm46\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:27.788030       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-9429/update-demo-nautilus-shl67\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:29.064458       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5513/pod-subpath-test-inlinevolume-6hfc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:35.353330       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-runtime-2188/image-pull-test87f95312-4620-4f0e-97c7-5a21831e92b0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:35.824487       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/service-headless-82dvr\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:35.824688       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/service-headless-wzj8p\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:35.825551       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/service-headless-mhp2d\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:38.940595       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/service-headless-toggled-tgw97\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:38.958115       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/service-headless-toggled-tvsrv\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:38.970768       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/service-headless-toggled-rv7pz\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:39.932397       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-5297/pod-a2029660-be21-4250-b94d-33dc4f63ae6f\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:42.046581       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/verify-service-up-host-exec-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:42.437484       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-6688/pod-always-succeed8482d36c-0d46-47f0-bcd2-ed44de2cb297\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:44.133466       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-4819/verify-service-up-exec-pod-bwndv\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:44.367481       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-583/hostexec-ip-172-20-38-136.us-east-2.compute.internal-2lsmz\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 
09:21:44.651810       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1682-8179/csi-mockplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:47.850652       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3928/hostexec-ip-172-20-52-221.us-east-2.compute.internal-28pjv\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:50.421191       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3928/pod-dd16ccaf-71ca-47bb-9581-acb3c5d0ae85\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:54.289659       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-4248/pod-projected-configmaps-a5c5010f-9001-47a1-ac7c-d9a22e3a02ed\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:21:55.053374       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-583/pod-subpath-test-preprovisionedpv-skgz\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:56.823613       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5792/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:56.854076       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5792/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:56.885846       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5792/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:56.918168       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5792/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:21:57.016861       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1682/pvc-volume-tester-nv222\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:00.010069       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-747/hostexec-ip-172-20-55-216.us-east-2.compute.internal-cg8jn\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:05.256576       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-1233/agnhost-primary-vtm4s\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:08.267129       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-2102/downwardapi-volume-408b8a7c-57ed-4cb2-a988-1461696ba2fc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:10.720091       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-747/pod-subpath-test-preprovisionedpv-qvvw\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:10.959894       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3048/httpd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:17.661248       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-5279/test-pod\" 
node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:19.263056       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5792/test-container-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:19.294442       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5792/host-test-container-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:19.704023       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7675-6082/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:21.803668       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-5399/alpine-nnp-true-5b686b1b-14d9-4f5b-828e-14c1b2aa2493\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:22.451169       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-9965/aws-client\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:23.712587       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-6388/pod-subpath-test-inlinevolume-wsh4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:24.959836       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-5561/pod-80c31d2f-68cb-42df-b278-1bc6a6bc9547\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:25.020485       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7675/pvc-volume-tester-649sn\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:26.273374       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1472/hostexec-ip-172-20-55-216.us-east-2.compute.internal-cpglf\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:27.110254       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-5561/hostexec-ip-172-20-57-184.us-east-2.compute.internal-5r5tj\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:28.271184       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-976/hostexec-ip-172-20-57-184.us-east-2.compute.internal-m5fvb\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:29.660819       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1472/pod-cb09e4c2-5df2-46e7-bda7-dc6e4820f7a5\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:32.370838       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1472/pod-97e158f9-2110-4f93-acd2-aad68db9f104\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:33.124636       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3257/hostexec-ip-172-20-52-221.us-east-2.compute.internal-5h2db\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:40.950595       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"volume-976/exec-volume-test-preprovisionedpv-ffp9\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:41.760723       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3257/pod-subpath-test-preprovisionedpv-cjrk\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:43.836708       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-2119/pod-test\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:44.748959       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8601-3978/csi-mockplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:44.812346       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8601-3978/csi-mockplugin-attacher-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:46.043839       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3257/pod-subpath-test-preprovisionedpv-cjrk\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:48.910454       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6268/ss-2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:49.356280       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-4222/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:49.386057       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-4222/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:49.427610       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-4222/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:49.455976       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-4222/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:52.436543       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-probe-1747/liveness-1c9a3b21-4016-4a01-9a53-6490485fd8c2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:22:56.093391       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8601/pvc-volume-tester-cpnpg\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:22:58.579748       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"hostpath-5778/pod-host-path-test\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:00.123647       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-6164/successful-jobs-history-limit-27091283--1-g9q5n\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:01.012454       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"var-expansion-2301/var-expansion-20799e9c-30ec-4cf4-9bde-8db2d7d8ddbb\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:04.828964       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"volumemode-4425-6537/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:07.038804       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4425/pod-6537f605-9ac9-4296-b395-ef083de39b05\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:09.185422       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-4425/hostexec-ip-172-20-52-221.us-east-2.compute.internal-klb4j\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:09.771343       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-4222/test-container-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:09.801491       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-4222/host-test-container-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:10.529953       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-6238/pod-subpath-test-dynamicpv-4zhd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:12.534002       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1616/hostexec-ip-172-20-55-216.us-east-2.compute.internal-c466b\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:12.615358       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5/hostexec-ip-172-20-52-221.us-east-2.compute.internal-vtv22\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:13.163349       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7233/hostexec-ip-172-20-57-184.us-east-2.compute.internal-ws2v2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:14.747495       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-6327/busybox-user-65534-e63b487b-3665-484f-bf02-b2f95e961280\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:15.966770       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"container-runtime-1239/image-pull-test68bc11aa-0c56-4355-a663-34f7e9b4a111\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:16.809566       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-2546/pod-projected-secrets-9dd623fa-2de6-41f2-9ddc-0bd78d717069\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:17.395936       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-900/hostexec-ip-172-20-52-221.us-east-2.compute.internal-7psfm\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:18.265067       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2177-598/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:18.298147       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2177-598/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:18.331491       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2177-598/csi-mockplugin-resizer-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:19.297624       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-196/hostexec-ip-172-20-55-216.us-east-2.compute.internal-2r9lf\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:23.055307       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3262/httpd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:23.597733       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2177/pvc-volume-tester-j85bm\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:25.282577       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5/pod-subpath-test-preprovisionedpv-nj7j\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:25.321318       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1616/pod-subpath-test-preprovisionedpv-tk2j\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:26.029077       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-900/pod-subpath-test-preprovisionedpv-jbns\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:26.207495       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-196/pod-subpath-test-preprovisionedpv-89xq\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:26.287620       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7233/pod-subpath-test-preprovisionedpv-gnk2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:31.189709       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-1011/e2e-test-httpd-pod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:31.589218       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5/pod-subpath-test-preprovisionedpv-nj7j\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:33.403375       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-3262/failure-1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:33.623667       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-2573/pod-secrets-ed19adee-f43c-4ea2-bf8a-8ad50a4acdbc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:34.459364       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5855/pod-subpath-test-inlinevolume-fcms\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:35.295792       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-1730/pod-secrets-25b9564f-e4dd-411d-bc84-75e44469dd5f\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:36.107917       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"container-runtime-9497/termination-message-containerce71abb5-2164-453b-afef-2e7098bb0771\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:36.637725       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-1409/e2e-test-httpd-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:37.561440       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-1576/inline-volume-tkdg4\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tkdg4-my-volume\\\" not found.\"\nI0705 09:23:38.900877       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-7437/sample-webhook-deployment-78988fc6cd-sk569\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:41.215000       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-1576-6258/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:41.294489       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-1576/inline-volume-tester-x27rb\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-x27rb-my-volume-0\\\" not found.\"\nI0705 09:23:42.515367       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5228/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:42.552046       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5228/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:42.611202       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5228/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:42.630986       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5228/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:44.205439       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-1576/inline-volume-tester-x27rb\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:23:47.604241       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-3912/simpletest.deployment-9858f564d-v8j9s\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:47.619830       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-3912/simpletest.deployment-9858f564d-dw65d\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:50.484759       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-6740/exec-volume-test-inlinevolume-xhxc\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:54.416020       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-3033/ss-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:23:55.483967       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7726/aws-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:00.131495       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"cronjob-6164/successful-jobs-history-limit-27091284--1-6kt6l\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:00.140490       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-6008/forbid-27091284--1-bfcw9\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:02.943677       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-5228/test-container-pod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:04.526934       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-6685/pod-8f7a2550-3c69-4569-8d31-55cbda597be7\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:07.047355       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4661/hostexec-ip-172-20-38-136.us-east-2.compute.internal-88znq\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:24:11.620960       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4661/pod-subpath-test-preprovisionedpv-7qxq\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:24:36.583017       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6268/ss-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:36.905850       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-311/hairpin\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.682350       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-lx659\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.689651       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-q2c6t\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.706133       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-skf2r\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.725283       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-sd8ns\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.725608       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-plchs\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.726284       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-wmlv7\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.726333       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-pdvbl\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.746321       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-fc96w\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.749011       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"gc-5751/simpletest-rc-to-be-deleted-ghxnx\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:46.751327       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-5751/simpletest-rc-to-be-deleted-9chzs\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:24:58.337061       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4139/hostexec-ip-172-20-55-216.us-east-2.compute.internal-ccljc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:25:00.866123       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4139/pod-aa7d1e08-ceed-4b5b-8b63-ee43267facf2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:25:03.597037       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4139/pod-7d050e76-c06d-4eb7-afb9-c47ef1f0c882\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:25:06.918391       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-pf9nq\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.940223       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-td86p\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.947752       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-jnq6s\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.950340       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-2mgqk\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.954647       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-jqdck\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.955683       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-bq754\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.955994       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-cgcld\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.976778       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-bccvd\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.988216       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-hrd2j\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:06.991996       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-8jxqf\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:09.682319       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"container-probe-3989/busybox-6f1de737-e40a-4c13-8dea-1a265377fd65\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:11.204113       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-mz68f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:11.228007       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-wd5s2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:11.228132       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-vnc4w\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:11.284026       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-6xq62\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:11.292162       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-tzl95\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.572532       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-qnpzf\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.593409       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-6gk2p\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.604424       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-nphht\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.605022       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-frpln\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.605282       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-98nrf\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.623146       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-glh9f\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.647131       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-774wn\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.654661       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-c5lp8\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.677353       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-ctz7j\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.680388       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-8j7zk\" 
node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.681504       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-c67hx\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.698260       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-dph24\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.711487       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-rgfbz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.712259       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-w5hwn\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.712529       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-vq5dt\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.715802       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-795d758f88-7f6tc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.716261       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-pgcpv\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.716261       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-q64nk\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.716311       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-88w4h\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:13.716378       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-2126/webserver-deployment-847dcfb7fb-knjqn\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:14.173907       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/service-proxy-disabled-qh8np\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:14.184877       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/service-proxy-disabled-6rx9l\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:14.199793       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/service-proxy-disabled-q7hff\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:19.710561       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4661/update-demo-nautilus-4mfr9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:19.716393       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-4661/update-demo-nautilus-tfthq\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:20.332417       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"services-8370/service-proxy-toggled-fg224\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:20.342570       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/service-proxy-toggled-5s9dw\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:20.348541       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/service-proxy-toggled-lz65f\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:23.430155       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/verify-service-up-host-exec-pod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:25.522603       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-8370/verify-service-up-exec-pod-bvk6h\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:35.060793       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-8800/httpd\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:44.289194       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-7051/rc-test-l2tpg\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:44.360864       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5129/pod-subpath-test-dynamicpv-nqpn\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:44.465337       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-4388/exec-volume-test-dynamicpv-f9dk\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:44.926394       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-6875/pod-99ddfac1-9ed9-4398-9ecd-d004cd4bd780\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:45.538966       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-7051/rc-test-lk56v\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:46.521155       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1318/pod-subpath-test-dynamicpv-dc9x\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:46.938850       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2136/hostexec-ip-172-20-55-216.us-east-2.compute.internal-jpjbh\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:25:47.632232       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-9191/httpd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:48.298844       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8412/pod-subpath-test-dynamicpv-49h8\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:55.575334       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2136/pod-subpath-test-preprovisionedpv-jldd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 
09:25:59.099678       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-921/pfpod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:25:59.851060       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2136/pod-subpath-test-preprovisionedpv-jldd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:02.907442       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-7619/pod-698e7553-89d7-49d5-8448-e22fa04e657a\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:03.739286       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-5571/sample-webhook-deployment-78988fc6cd-z2jsh\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:05.240861       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5129/pod-subpath-test-dynamicpv-nqpn\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:05.418994       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8958/hostexec-ip-172-20-52-221.us-east-2.compute.internal-bfg6j\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:08.547351       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-2957/backofflimit--1-vplsq\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:09.678700       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-2698-3140/csi-hostpathplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:10.009425       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-2957/backofflimit--1-fkpwz\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:11.620045       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6651-9630/csi-mockplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:11.659423       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6651-9630/csi-mockplugin-attacher-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:12.428329       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9281/indexed-job-0-qr6hc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:12.438382       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9281/indexed-job-1-kjhcc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:12.503740       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-9998/pod-628e2ff0-e87d-4229-9894-7293e8a576b4\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:13.506224       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-6875/pod-0baec778-de85-49b5-a23e-25673b225e05\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:13.860433       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9281/indexed-job-2-lqpbp\" 
node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:13.928901       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-2698/pod-d268c9c8-fee4-4a29-9f79-9289e74109dc\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:14.247764       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"job-9281/indexed-job-3-ddpb5\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:16.086327       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-2698/hostexec-ip-172-20-57-184.us-east-2.compute.internal-zvb9n\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:18.952474       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6651/pvc-volume-tester-wqcw8\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:20.288803       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-69-2488/csi-mockplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:20.348196       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-69-2488/csi-mockplugin-attacher-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:20.855500       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4763/pod-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:20.885563       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4763/pod-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:20.916372       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-4763/pod-2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:23.515902       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8316/hostexec-ip-172-20-57-184.us-east-2.compute.internal-v7g9s\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:25.494737       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-69/pvc-volume-tester-rt2w4\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:27.491221       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"proxy-6385/proxy-service-j78w8-v5mqz\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:29.596970       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-69/inline-volume-s2wcv\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:31.159368       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-8321/security-context-cb660c38-8cea-43c6-862b-2660bba0043b\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:33.646667       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-6394/hostexec-ip-172-20-55-216.us-east-2.compute.internal-jhx8w\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:40.256880       1 scheduler.go:662] \"Successfully bound pod to 
node\" pod=\"provisioning-8316/pod-subpath-test-preprovisionedpv-qhfm\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:40.288059       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-6394/pod-subpath-test-preprovisionedpv-8tl6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:43.423354       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-9680/sample-webhook-deployment-78988fc6cd-shgpp\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:44.849936       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-3499/pod-projected-secrets-c9704e8e-83f7-46fd-bdd4-f2009bdc78d5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:47.261619       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-4405/pod-subpath-test-inlinevolume-5nl9\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:47.372231       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6525/hostexec-ip-172-20-57-184.us-east-2.compute.internal-vxkjv\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:51.750065       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-7349/downwardapi-volume-219990dc-2d7d-42ea-801b-8002b9143dbd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:51.821712       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-8401/pfpod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:53.135826       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3346-3851/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:53.299548       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3346/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:26:53.446611       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-4522/dns-test-397fbad3-955d-42ea-a3bf-b15253a37311\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:53.769345       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-test-4623/busybox-readonly-fs1ac854e6-639e-41e8-98a3-1ddd3f112781\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:54.429444       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-expand-3860/pod-1b8cbc98-a76c-4e90-92e8-25c6642ac3f8\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:55.298740       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3346/hostpath-injector\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:56.603116       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4855-1511/csi-mockplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:56.671455       1 scheduler.go:662] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-4855-1511/csi-mockplugin-attacher-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:26:56.832718       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"svcaccounts-3603/pod-service-account-32d0b383-d88d-4d8e-91e4-f2de62bfda6c\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:26:57.562756       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-24-2354/csi-mockplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:00.587460       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-686/pod-projected-secrets-3b8a414b-f453-436c-b6e5-be762d22db1d\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:03.488362       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9697-8537/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:03.548980       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9697-8537/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:04.005794       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4855/pvc-volume-tester-rc9bc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:04.660203       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1434/hostexec-ip-172-20-55-216.us-east-2.compute.internal-vkjr9\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:07.905952       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-24/pvc-volume-tester-zsmbd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:08.258805       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2798-6661/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:10.493341       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2798/pod-subpath-test-dynamicpv-zfr2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:11.255893       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1434/pod-subpath-test-preprovisionedpv-bbbc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:13.802535       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-9697/pvc-volume-tester-l67hx\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:27:16.466030       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-1360/hostexec-ip-172-20-52-221.us-east-2.compute.internal-hxh99\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:19.356221       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-1818/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:19.386696       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-1818/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:19.417674       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-1818/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:19.448510       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-1818/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:22.923975       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3346/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:27:24.323413       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3346/hostpath-client\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:25.423249       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-1360/exec-volume-test-preprovisionedpv-g8hb\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:28.779986       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-4507/downward-api-622c9884-8d5b-464d-abba-4a8b80c05083\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:31.269700       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9775/hostexec-ip-172-20-38-136.us-east-2.compute.internal-tkcjd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:39.757944       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-1818/test-container-pod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:39.885292       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9775/pod-subpath-test-preprovisionedpv-sq8s\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:41.087442       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-7750/logs-generator\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:42.155095       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"sysctl-9031/sysctl-52319b06-556e-4c2e-b9d7-d55f69d27c8d\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:44.473231       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-9778/pod-configmaps-5cc41850-3a6e-4bfb-948d-edb4f8da8836\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:47.005330       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2030/hostexec-ip-172-20-52-221.us-east-2.compute.internal-mkgcw\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:49.529863       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2030/pod-b4be1ef3-035e-440b-a654-fef0dbaaa793\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:27:50.213047 
      1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-410/pod-update-activedeadlineseconds-0e16fad6-bcd5-48a2-b0fe-73c9b834b561\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:51.037650       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-346/downward-api-54bbb239-66e8-43f3-8ad6-06d8e989022e\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:51.338086       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2030/pod-b4be1ef3-035e-440b-a654-fef0dbaaa793\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-5zhbd\\\" not found.\"\nE0705 09:27:51.340960       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-b4be1ef3-035e-440b-a654-fef0dbaaa793.168edaf75496883d\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-2030\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-2030\", Name:\"pod-b4be1ef3-035e-440b-a654-fef0dbaaa793\", UID:\"0f1b45da-2236-4c13-93fa-5eac8faf5c09\", APIVersion:\"v1\", ResourceVersion:\"40164\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-5zhbd\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc030d205d429623d, ext:1739346269134, loc:(*time.Location)(0x320d400)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc030d205d429623d, ext:1739346269134, loc:(*time.Location)(0x320d400)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-b4be1ef3-035e-440b-a654-fef0dbaaa793.168edaf75496883d\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2030 because it is being terminated' (will not retry!)\nI0705 09:27:53.338430       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2030/pod-b4be1ef3-035e-440b-a654-fef0dbaaa793\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-5zhbd\\\" not found.\"\nE0705 09:27:53.341920       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-b4be1ef3-035e-440b-a654-fef0dbaaa793.168edaf75496883d\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-2030\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-2030\", Name:\"pod-b4be1ef3-035e-440b-a654-fef0dbaaa793\", UID:\"0f1b45da-2236-4c13-93fa-5eac8faf5c09\", APIVersion:\"v1\", ResourceVersion:\"40218\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-5zhbd\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc030d205d429623d, ext:1739346269134, loc:(*time.Location)(0x320d400)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc030d206542e0995, ext:1741346574113, loc:(*time.Location)(0x320d400)}}, Count:2, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-b4be1ef3-035e-440b-a654-fef0dbaaa793.168edaf75496883d\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2030 because it is being terminated' (will not retry!)\nI0705 09:27:53.590290       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5390/hostexec-ip-172-20-57-184.us-east-2.compute.internal-zsbfl\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:54.457598       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4054/hostexec-ip-172-20-52-221.us-east-2.compute.internal-2ff5k\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:55.062430       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3991/hostexec-ip-172-20-57-184.us-east-2.compute.internal-fq2t2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:56.532489       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pvc-protection-986/pvc-tester-m5ksd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:56.932270       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-1000/externalsvc-4bndw\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:56.949180       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-1000/externalsvc-p6mzd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:57.031416       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4054/pod-e09edd3b-a2af-490e-8b07-47ec521da221\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:27:57.130895       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-1628/downwardapi-volume-0a086999-e285-44a2-b519-93080fe35731\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:57.267808       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-1597/pod-a4b892d6-751a-44c3-b723-5ea3c64d4524\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:27:59.669437       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"disruption-2996/pod-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0705 09:27:59.710431       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4054/pod-123f65b1-6f67-420c-b000-b3466fbe655e\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:00.102450       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-1000/execpod4hsbd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:02.285754       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-1186/pfpod\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:03.863428       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-5345/aws-injector\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:05.105167       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-517-7851/csi-hostpathplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:05.161712       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-517/inline-volume-tester-cgw6h\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:05.190555       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-6738/metadata-volume-1fd4dc68-5656-414a-8296-ccb3d855e9cc\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:07.664391       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1625/hostexec-ip-172-20-57-184.us-east-2.compute.internal-jjw78\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:09.292264       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-517/inline-volume-tester2-fhlfh\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:10.150863       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3991/pod-subpath-test-preprovisionedpv-gf5p\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:14.440769       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3991/pod-subpath-test-preprovisionedpv-gf5p\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:15.040089       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"sysctl-5364/sysctl-379e7a2c-1614-4900-8a7b-199b9106a940\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:15.490037       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-5345/aws-client\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:16.362741       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9882/slow-terminating-unready-pod-w29z9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:17.471726       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-8047/emptydir-injector\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:17.889665       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-1872/pod-configmaps-cfcde6f6-9077-490b-b8b9-f2645b3d39b2\" 
node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:20.202956       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"e2e-privileged-pod-8649/privileged-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:20.260506       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5356/pod-subpath-test-inlinevolume-pwxb\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:20.668263       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-9962/httpd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:22.612636       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5745/hostexec-ip-172-20-57-184.us-east-2.compute.internal-rqwsn\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:24.744426       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4871-2557/csi-mockplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:24.801101       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4871-2557/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:26.458076       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1625/pod-subpath-test-preprovisionedpv-mqln\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:27.062625       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-1278/downwardapi-volume-98a88d14-6f35-430a-9f54-235d3174d767\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:29.617517       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:30.120087       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4871/pvc-volume-tester-qlnjs\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:32.005886       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6723-8340/csi-mockplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:32.098457       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6723-8340/csi-mockplugin-resizer-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:35.762607       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"var-expansion-4273/var-expansion-3176d265-1d0a-40c5-a7d7-6d9ba0540e17\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:39.427876       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6723/pvc-volume-tester-m8vm9\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:40.860704       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-8456/pod1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:40.889163       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"gc-8456/pod2\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:40.919499       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-8456/pod3\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:41.520629       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-5745/pod-subpath-test-preprovisionedpv-qnhw\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:43.958274       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"containers-220/client-containers-3457044a-dd94-4306-9140-3f389192a1d4\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:46.474030       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-9166/condition-test-57d9k\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:46.492240       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-9166/condition-test-s5z6p\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:46.513958       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-782/nfs-server\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:48.036755       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-7748/pod1\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:50.898032       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6268/ss-1\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:51.224575       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-782/pvc-tester-hntwb\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:54.278403       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-7748/execpodlpjrc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:55.858739       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6268/ss-2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:28:57.233449       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2587-7830/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:57.278006       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9024/hostexec-ip-172-20-52-221.us-east-2.compute.internal-8gtcr\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:28:59.456854       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2587/pod-subpath-test-dynamicpv-tdtl\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:07.396960       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-709/projected-volume-9d01daef-8a52-418c-99e3-c91ad4b71465\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:09.974191       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-9024/pod-subpath-test-preprovisionedpv-f8j8\" 
node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:10.004419       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-2239/e2e-test-httpd-pod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:13.764263       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7651-865/csi-hostpathplugin-0\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:14.079530       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-7038/implicit-root-uid\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:14.956106       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7907/hostexec-ip-172-20-38-136.us-east-2.compute.internal-5g62w\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:15.935131       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"webhook-9363/sample-webhook-deployment-78988fc6cd-qhqnb\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:17.405569       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3446/hostexec-ip-172-20-57-184.us-east-2.compute.internal-8h2lk\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:18.046993       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-7651/pod-subpath-test-dynamicpv-t56r\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:18.986291       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubelet-test-504/bin-false4b778395-1cb6-496c-bb23-bc40be99e4f5\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:19.537048       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"emptydir-6183/pod-3d698870-ab4b-4c64-baa9-b67eb35a9775\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:20.894963       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3446/pod-8425392b-046e-44bf-abb9-0855ad0fe271\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:22.034357       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-2737/downwardapi-volume-062ef1d5-5718-4eb7-a384-c6a399d738cc\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:23.562202       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3446/pod-45a7cc57-4f8a-4daa-8472-94abc6daf2ea\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:24.529946       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-9070/pod-submit-remove-9b4ae1c2-4c99-45df-8673-10ed548c10ec\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:28.140594       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5163/hostexec-ip-172-20-52-221.us-east-2.compute.internal-9vgc4\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:31.067925       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5163/pod-3953ada2-406a-4fed-b9be-5e2c9b7184b2\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:33.730353       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5163/pod-f63a57df-17b5-498e-9902-6e8a292d9b4c\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:34.367159       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1314/hostexec-ip-172-20-55-216.us-east-2.compute.internal-thr9x\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:40.127547       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8934-5227/csi-hostpathplugin-0\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:40.955909       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-1314/pod-subpath-test-preprovisionedpv-wzv6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:42.362519       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8934/pod-subpath-test-dynamicpv-68kl\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:45.572254       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-3555/security-context-a4041057-0772-450b-baae-dc8c3c6cf35c\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:47.940691       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-2524/no-cross-namespace-affinity\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:29:47.974393       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-2524/with-namespaces\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:29:48.070163       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"sysctl-6046/sysctl-fd50d3d4-f990-4efd-a2fd-b8c514c73781\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:50.124046       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-2524/with-namespace-selector\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0705 09:29:50.813264       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"containers-4267/client-containers-c7d67a64-9ce0-40e0-8e19-4f0a0282b517\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:51.908795       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:53.492441       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-4606/pod-projected-configmaps-b0a28d6a-f6a3-46f7-bf67-4846fcbc3152\" 
node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:55.953618       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-1275/pod-release-zx8q4\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:56.051279       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replication-controller-1275/pod-release-xfhtx\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:29:56.366934       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-281/hostexec-ip-172-20-55-216.us-east-2.compute.internal-chdll\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:29:57.549080       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3613/hostexec-ip-172-20-57-184.us-east-2.compute.internal-zwhw4\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:00.125953       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-4945/failed-jobs-history-limit-27091290--1-xt82b\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:06.614504       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-test-5246/busybox-readonly-true-2d805700-3589-4d1d-8e4f-1a8a078a3db7\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:07.961683       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"subpath-6217/pod-subpath-test-configmap-hhz7\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:09.000576       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-949/deployment-55649fd747-rf58n\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:09.008647       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-949/deployment-55649fd747-v4gph\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:09.021283       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-949/deployment-55649fd747-zxkp4\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:09.056119       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-949/deployment-55649fd747-n7tfq\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:09.064815       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"apply-949/deployment-55649fd747-scvqw\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:09.580085       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-515/pod-subpath-test-inlinevolume-mmd8\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:10.262865       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3613/pod-subpath-test-preprovisionedpv-pp8v\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:11.104624       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-281/pod-subpath-test-preprovisionedpv-p8h6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 
09:30:24.455304       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9585/inline-volume-4gqjb\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-4gqjb-my-volume\\\" not found.\"\nI0705 09:30:24.669915       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4741/hostexec-ip-172-20-55-216.us-east-2.compute.internal-7b2hq\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:27.452108       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4741/pod-3cef4d62-d9a7-477c-b0b2-3b22f4d753db\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:28.400536       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-9585-6304/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:28.479308       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9585/inline-volume-tester-t99pf\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-t99pf-my-volume-0\\\" not found.\"\nI0705 09:30:29.119573       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"security-context-8184/security-context-8e80886e-d92f-4579-9975-cbc41307f2c9\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:30.770680       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"statefulset-6342/ss-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:30.892086       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-2792/e2e-configmap-dns-server-c86aee7b-73c5-4494-9b5f-55335df31c55\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:31.494354       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-9585/inline-volume-tester-t99pf\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:31.678507       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"port-forwarding-417/pfpod\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:31.992965       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2318/hostexec-ip-172-20-55-216.us-east-2.compute.internal-wqmlx\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:33.013268       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-2792/e2e-dns-utils\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:34.601276       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2338/hostexec-ip-172-20-57-184.us-east-2.compute.internal-h7zln\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:36.133579       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-144/netserver-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:36.169430       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-144/netserver-1\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:36.196265       1 scheduler.go:662] \"Successfully bound pod to 
node\" pod=\"nettest-144/netserver-2\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:36.228471       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-144/netserver-3\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:38.958571       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"secrets-8713/pod-secrets-abe29f38-9149-475e-a2b0-59a220b3957d\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:40.208351       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-3496/test-recreate-deployment-6cb8b65c46-df4bb\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:40.602927       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9585/inline-volume-tester2-kfqcd\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester2-kfqcd-my-volume-0\\\" not found.\"\nI0705 09:30:41.296792       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2338/pod-subpath-test-preprovisionedpv-5sqq\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:41.463625       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-2318/pod-subpath-test-preprovisionedpv-xxhf\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:41.469821       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-7501/pod-4e071368-e54d-45c9-8637-b9f69a075242\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:41.501866       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8880/hostexec-ip-172-20-52-221.us-east-2.compute.internal-44sm8\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:42.407152       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"deployment-3496/test-recreate-deployment-85d47dcb4-5rttc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.616978       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-98v79\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.630498       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-x4xcx\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.632074       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-wsq4p\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.654857       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-rbkrp\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.659901       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-lf64d\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.660610       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-hrds4\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.661057       1 scheduler.go:662] 
\"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-qrtc5\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.675532       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-frqv6\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.680883       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-crpvg\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.681091       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"gc-2290/simpletest.rc-j8tdq\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:42.947520       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"configmap-9898/pod-configmaps-3a589eb7-e3dc-4776-bf32-4c16a8692ac8\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:43.465342       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-9585/inline-volume-tester2-kfqcd\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:46.273285       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/up-down-1-pdw5w\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:46.288562       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/up-down-1-459nh\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:46.294779       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/up-down-1-xqdh5\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:47.486833       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"dns-6483/dns-test-1a330042-ac9c-4464-a85d-1ce00e7cf577\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:48.603584       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volumemode-7501/hostexec-ip-172-20-52-221.us-east-2.compute.internal-2tjxc\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:50.611219       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-144/test-container-pod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:50.642062       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"nettest-144/host-test-container-pod\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:52.389466       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/up-down-2-znzb8\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:52.409693       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/up-down-2-kpkss\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:52.417765       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/up-down-2-99jz4\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:30:53.558176       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-3485/pod-subpath-test-inlinevolume-h4pq\" 
node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:56.263047       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8880/pod-subpath-test-preprovisionedpv-r7xr\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:57.194755       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-6030/hostexec-ip-172-20-38-136.us-east-2.compute.internal-qfnmk\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:57.298337       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7577-2/csi-mockplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:57.328036       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7577-2/csi-mockplugin-attacher-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:30:58.483454       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/verify-service-up-host-exec-pod\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:00.132490       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"cronjob-4945/failed-jobs-history-limit-27091291--1-q9j4g\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:00.599732       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"services-9634/verify-service-up-exec-pod-zg265\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:02.596797       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7577/pvc-volume-tester-7ctp2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:02.820953       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"conntrack-1305/pod-client\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:06.973138       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"conntrack-1305/pod-server-1\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:07.046254       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"projected-2574/pod-projected-configmaps-90dbdab5-8dea-44b8-8045-f6f7f0d66b88\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:09.925570       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-6030/pod-subpath-test-preprovisionedpv-4959\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:10.133422       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"provisioning-8141/pod-subpath-test-inlinevolume-6ndm\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:10.711931       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4046-3242/csi-mockplugin-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:10.768127       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4046-3242/csi-mockplugin-resizer-0\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:10.773305       1 
scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-452/nfs-server\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:13.576895       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pv-452/pvc-tester-7cx26\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:14.999959       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"prestop-2100/pod-prestop-hook-1e1f02af-0321-4c04-89f9-73971107c04f\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:15.136162       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"downward-api-7470/downwardapi-volume-bbcd1753-3ac3-4d56-aeae-6d80545b36ad\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:16.063467       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4046/pvc-volume-tester-v65pg\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:18.234898       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"kubectl-2788/httpd\" node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:20.630679       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7532/hostexec-ip-172-20-57-184.us-east-2.compute.internal-mb42d\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:21.779263       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9612/inline-volume-dldxx\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-dldxx-my-volume\\\" not found.\"\nI0705 09:31:23.422487       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-9612-4830/csi-hostpathplugin-0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:23.500164       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9612/inline-volume-tester-c6tj2\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-c6tj2-my-volume-0\\\" not found.\"\nI0705 09:31:25.199052       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"volume-7532/exec-volume-test-preprovisionedpv-8bsd\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:25.485458       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9612/inline-volume-tester-c6tj2\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:31:27.486234       1 factory.go:382] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9612/inline-volume-tester-c6tj2\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0705 09:31:27.993065       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"crd-webhook-6782/sample-crd-conversion-webhook-deployment-697cdbd8f4-wslrh\" node=\"ip-172-20-52-221.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:28.055508       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"pods-8806/pod-exec-websocket-0434ac71-0ba7-49bb-96ac-1a8442565ac0\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:30.677204       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-4903/test-rs-62nv7\" 
node=\"ip-172-20-55-216.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:30.695262       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-4903/test-rs-95skt\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:30.701909       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"replicaset-4903/test-rs-nvmwn\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0705 09:31:31.490787       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"ephemeral-9612/inline-volume-tester-c6tj2\" node=\"ip-172-20-38-136.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0705 09:31:32.428006       1 scheduler.go:662] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4046/pvc-volume-tester-s9jv7\" node=\"ip-172-20-57-184.us-east-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-53-155.us-east-2.compute.internal ====\n==== START logs for container nginx of pod kube-system/metrics-proxy ====\n==== END logs for container nginx of pod kube-system/metrics-proxy ====\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"20512\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"46054\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"46055\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"46055\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"46056\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"46056\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"46057\"\n    },\n    \"items\": []\n}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:31:33.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-908" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":48,"skipped":481,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:31:33.751: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Jul  5 09:31:02.491: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hbtzp] to have phase Bound
Jul  5 09:31:02.521: INFO: PersistentVolumeClaim pvc-hbtzp found and phase=Bound (29.717533ms)
STEP: Deleting the previously created pod
Jul  5 09:31:08.674: INFO: Deleting pod "pvc-volume-tester-7ctp2" in namespace "csi-mock-volumes-7577"
Jul  5 09:31:08.709: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7ctp2" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 09:31:14.803: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/02068c7b-5aee-4f2c-a4ec-0a4048aa55c6/volumes/kubernetes.io~csi/pvc-d8b28cf3-530a-49ad-a137-7ad9beffe59f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-7ctp2
Jul  5 09:31:14.803: INFO: Deleting pod "pvc-volume-tester-7ctp2" in namespace "csi-mock-volumes-7577"
STEP: Deleting claim pvc-hbtzp
Jul  5 09:31:14.893: INFO: Waiting up to 2m0s for PersistentVolume pvc-d8b28cf3-530a-49ad-a137-7ad9beffe59f to get deleted
Jul  5 09:31:14.926: INFO: PersistentVolume pvc-d8b28cf3-530a-49ad-a137-7ad9beffe59f found and phase=Released (33.172752ms)
Jul  5 09:31:16.956: INFO: PersistentVolume pvc-d8b28cf3-530a-49ad-a137-7ad9beffe59f was removed
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":19,"skipped":138,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 09:31:36.119: INFO: Only supported for providers [azure] (not aws)
... skipping 50 lines ...
• [SLOW TEST:22.400 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":22,"skipped":154,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
Jul  5 09:31:37.236: INFO: Running AfterSuite actions on all nodes
Jul  5 09:31:37.236: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:31:37.236: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:31:37.236: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:31:37.236: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:31:37.236: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":28,"skipped":215,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
Jul  5 09:31:41.447: INFO: Running AfterSuite actions on all nodes
Jul  5 09:31:41.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:31:41.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:31:41.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:31:41.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:31:41.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 30 lines ...
• [SLOW TEST:67.681 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":167,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
Jul  5 09:31:50.337: INFO: Running AfterSuite actions on all nodes
Jul  5 09:31:50.337: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:31:50.337: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:31:50.337: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:31:50.337: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:31:50.337: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 75 lines ...
Mon Jul  5 09:31:59 UTC 2021 Try: 18

Mon Jul  5 09:32:04 UTC 2021 Try: 19

Mon Jul  5 09:32:09 UTC 2021 Try: 20

Jul  5 09:32:09.316: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0000d2900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0000d2900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
... skipping 237 lines ...
• Failure [68.488 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:208

  Jul  5 09:32:09.316: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":23,"skipped":231,"failed":2,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service"]}
Jul  5 09:32:11.087: INFO: Running AfterSuite actions on all nodes
Jul  5 09:32:11.087: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:32:11.087: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:32:11.087: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:32:11.087: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:32:11.087: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 25 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":49,"skipped":493,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}
Jul  5 09:32:14.124: INFO: Running AfterSuite actions on all nodes
Jul  5 09:32:14.124: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:32:14.124: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:32:14.124: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:32:14.124: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:32:14.124: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 117 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":49,"skipped":308,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
Jul  5 09:32:17.965: INFO: Running AfterSuite actions on all nodes
Jul  5 09:32:17.965: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:32:17.965: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:32:17.965: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:32:17.965: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:32:17.965: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":34,"skipped":257,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-network] Services should implement service.kubernetes.io/headless"]}
Jul  5 09:32:19.233: INFO: Running AfterSuite actions on all nodes
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  5 09:32:19.233: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":29,"skipped":227,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:31:27.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  5 09:31:31.145: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  5 09:31:31.180: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 09:32:03.351: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-6126-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-6782.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jul  5 09:32:33.483: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-6126-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-6782.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jul  5 09:33:03.515: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-6126-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-6782.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jul  5 09:33:03.516: FAIL: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 227 lines ...
• Failure [98.566 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:33:03.516: Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":29,"skipped":227,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
Jul  5 09:33:05.578: INFO: Running AfterSuite actions on all nodes
Jul  5 09:33:05.578: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:33:05.578: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:33:05.578: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:33:05.578: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:33:05.578: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":35,"skipped":238,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
Jul  5 09:33:18.240: INFO: Running AfterSuite actions on all nodes
Jul  5 09:33:18.240: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:33:18.240: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:33:18.240: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:33:18.240: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:33:18.240: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 24 lines ...
Jul  5 09:30:31.906: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3418.svc.cluster.local from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: the server is currently unable to handle the request (get pods dns-test-1f91c664-824c-445b-83ce-9853f25077a0)
Jul  5 09:31:01.949: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3418.svc.cluster.local from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: the server is currently unable to handle the request (get pods dns-test-1f91c664-824c-445b-83ce-9853f25077a0)
Jul  5 09:31:31.981: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3418.svc.cluster.local from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: the server is currently unable to handle the request (get pods dns-test-1f91c664-824c-445b-83ce-9853f25077a0)
Jul  5 09:32:02.015: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: the server is currently unable to handle the request (get pods dns-test-1f91c664-824c-445b-83ce-9853f25077a0)
Jul  5 09:32:32.047: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: the server is currently unable to handle the request (get pods dns-test-1f91c664-824c-445b-83ce-9853f25077a0)
Jul  5 09:33:02.080: INFO: Unable to read 100.66.28.60_udp@PTR from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: the server is currently unable to handle the request (get pods dns-test-1f91c664-824c-445b-83ce-9853f25077a0)
Jul  5 09:33:31.746: FAIL: Unable to read 100.66.28.60_tcp@PTR from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-3418/pods/dns-test-1f91c664-824c-445b-83ce-9853f25077a0/proxy/results/100.66.28.60_tcp@PTR": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f9b3f40d878, 0x18, 0xc002637698)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc002239290, 0x29e9900, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
testing.tRunner(0xc000390c00, 0x71cf618)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0705 09:33:31.747653   12675 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  5 09:33:31.746: Unable to read 100.66.28.60_tcp@PTR from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-3418/pods/dns-test-1f91c664-824c-445b-83ce-9853f25077a0/proxy/results/100.66.28.60_tcp@PTR\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f9b3f40d878, 0x18, 0xc002637698)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc002239290, 0x29e9900, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x780f3c8, 0xc00005e058, 0xc002637601, 0xc002637698, 0xc002239290, 0x67ba9a0, 0xc002239290)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x780f3c8, 0xc00005e058, 0x12a05f200, 0x8bb2c97000, 0xc002239290, 0x6cf83e0, 0x24f8401)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc00017c8c0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00437bb00, 0x14, 0x18, 0x6fb5f5e, 0x7, 0xc0024db400, 0x78a18a8, 0xc002ef3ce0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000ed0160, 0xc0024db400, 0xc00437bb00, 0x14, 0x18)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xe85\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000390c00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000390c00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b\ntesting.tRunner(0xc000390c00, 0x71cf618)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
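Annotation (not part of the job output): the lines elided above are, in Ginkgo releases of this vintage, its standard advice to guard goroutine-side assertions with GinkgoRecover. A minimal sketch of that pattern, under that assumption (the package name is hypothetical):

// Assertions made off the main test goroutine must be wrapped so a
// failure's panic is captured instead of crashing the whole suite.
package mysuite_test

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

var _ = It("asserts from a goroutine", func() {
	done := make(chan struct{})
	go func() {
		defer GinkgoRecover() // capture Fail's panic in this goroutine
		defer close(done)
		Expect(1 + 1).To(Equal(2))
	}()
	<-done
})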
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6b4ac20, 0xc0036c22c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6b4ac20, 0xc0036c22c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002116160, 0x141, 0x87cadfb, 0x7d, 0xd9, 0xc000bd4400, 0xa8c)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x628e540, 0x76c5570)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002116160, 0x141, 0xc0027475e0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002116160, 0x141, 0xc0027476c8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x7059d05, 0x24, 0xc002747928, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f9b3f40d878, 0x18, 0xc002637698)
... skipping 256 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:33:31.746: Unable to read 100.66.28.60_tcp@PTR from pod dns-3418/dns-test-1f91c664-824c-445b-83ce-9853f25077a0: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-3418/pods/dns-test-1f91c664-824c-445b-83ce-9853f25077a0/proxy/results/100.66.28.60_tcp@PTR": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":20,"skipped":222,"failed":5,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}
Jul  5 09:33:33.537: INFO: Running AfterSuite actions on all nodes
Jul  5 09:33:33.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:33:33.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:33:33.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:33:33.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:33:33.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 278 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  32s   default-scheduler  Successfully assigned pod-network-test-3186/netserver-3 to ip-172-20-57-184.us-east-2.compute.internal
  Normal  Pulled     32s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    32s   kubelet            Created container webserver
  Normal  Started    32s   kubelet            Started container webserver

Jul  5 09:16:29.659: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.3.190&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  5 09:16:29.659: INFO: ...failed...will try again in next pass
Jul  5 09:16:29.659: INFO: Breadth first check of 100.96.1.133 on host 172.20.52.221...
Jul  5 09:16:29.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.133&port=8081&tries=1'] Namespace:pod-network-test-3186 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 09:16:29.690: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 09:16:34.951: INFO: Waiting for responses: map[netserver-1:{}]
Jul  5 09:16:36.952: INFO: 
Output of kubectl describe pod pod-network-test-3186/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  40s   default-scheduler  Successfully assigned pod-network-test-3186/netserver-3 to ip-172-20-57-184.us-east-2.compute.internal
  Normal  Pulled     40s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    40s   kubelet            Created container webserver
  Normal  Started    40s   kubelet            Started container webserver

Jul  5 09:16:37.874: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.133&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  5 09:16:37.874: INFO: ...failed...will try again in next pass
Jul  5 09:16:37.874: INFO: Breadth first check of 100.96.4.217 on host 172.20.55.216...
Jul  5 09:16:37.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.217&port=8081&tries=1'] Namespace:pod-network-test-3186 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 09:16:37.903: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 09:16:43.149: INFO: Waiting for responses: map[netserver-2:{}]
Jul  5 09:16:45.149: INFO: 
Output of kubectl describe pod pod-network-test-3186/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  49s   default-scheduler  Successfully assigned pod-network-test-3186/netserver-3 to ip-172-20-57-184.us-east-2.compute.internal
  Normal  Pulled     49s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    49s   kubelet            Created container webserver
  Normal  Started    49s   kubelet            Started container webserver

Jul  5 09:16:46.076: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.217&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  5 09:16:46.076: INFO: ...failed...will try again in next pass
Jul  5 09:16:46.076: INFO: Breadth first check of 100.96.2.168 on host 172.20.57.184...
Jul  5 09:16:46.106: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.2.168&port=8081&tries=1'] Namespace:pod-network-test-3186 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 09:16:46.106: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 09:16:46.385: INFO: Waiting for responses: map[]
Jul  5 09:16:46.385: INFO: reached 100.96.2.168 after 0/1 tries
Jul  5 09:16:46.385: INFO: Going to retry 3 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m26s  default-scheduler  Successfully assigned pod-network-test-3186/netserver-3 to ip-172-20-57-184.us-east-2.compute.internal
  Normal  Pulled     6m26s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m26s  kubelet            Created container webserver
  Normal  Started    6m26s  kubelet            Started container webserver

Jul  5 09:22:23.176: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.3.190&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  5 09:22:23.176: INFO: ... Done probing pod [[[ 100.96.3.190 ]]]
Jul  5 09:22:23.176: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-3186/netserver-3 to ip-172-20-57-184.us-east-2.compute.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Jul  5 09:28:00.348: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.133&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  5 09:28:00.348: INFO: ... Done probing pod [[[ 100.96.1.133 ]]]
Jul  5 09:28:00.348: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  17m   default-scheduler  Successfully assigned pod-network-test-3186/netserver-3 to ip-172-20-57-184.us-east-2.compute.internal
  Normal  Pulled     17m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    17m   kubelet            Created container webserver
  Normal  Started    17m   kubelet            Started container webserver

Jul  5 09:33:37.847: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.217&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  5 09:33:37.847: INFO: ... Done probing pod [[[ 100.96.4.217 ]]]
Jul  5 09:33:37.847: INFO: succeeded at polling 1 out of 4 connections
Jul  5 09:33:37.847: INFO: pod polling failure summary:
Jul  5 09:33:37.847: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.3.190&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Jul  5 09:33:37.847: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.1.133&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Jul  5 09:33:37.847: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.174:9080/dial?request=hostname&protocol=udp&host=100.96.4.217&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Jul  5 09:33:37.848: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000833c80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 222 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  5 09:33:37.848: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":60,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
Jul  5 09:33:39.548: INFO: Running AfterSuite actions on all nodes
Jul  5 09:33:39.548: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:33:39.548: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:33:39.548: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:33:39.548: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:33:39.548: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 105 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jul  5 09:28:37.208: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jul  5 09:28:37.826: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jul  5 09:28:37.827: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jul  5 09:28:38.304: INFO: rc: 255
Jul  5 09:28:38.304: INFO: got err error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0705 09:28:38.288559     208 merged_client_builder.go:163] Using in-cluster namespace
I0705 09:28:38.288786     208 merged_client_builder.go:121] Using in-cluster configuration
I0705 09:28:38.291156     208 merged_client_builder.go:121] Using in-cluster configuration
I0705 09:28:38.294538     208 merged_client_builder.go:121] Using in-cluster configuration
I0705 09:28:38.295215     208 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-9962/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0705 09:28:38.306933     208 helpers.go:116] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0000a2a80, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307ec20, 0xc000000003, 0x0, 0x0, 0xc0004d8d90, 0x2, 0x2610819, 0xa, 0x74, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x307ec20, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc000817120, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00014a240, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2095a40, 0xc00000d260, 0x1f1b498)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:178 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000552000, 0xc0004f0ed0, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
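Annotation (not part of the job output): the invalid-token step above passes when the apiserver answers 401 Unauthorized promptly, which is what kubectl's captured output shows. A Go sketch of the same check against the in-cluster API; 100.64.0.1:443 is this cluster's kubernetes service IP from the log, and it should be run from a pod inside the cluster.

// badtoken.go - diagnostic sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	req, _ := http.NewRequest("GET", "https://100.64.0.1:443/api/v1/namespaces/default/pods", nil)
	req.Header.Set("Authorization", "Bearer invalid")

	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// Skip CA verification only because this is a quick probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // expected: 401 Unauthorized
}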
STEP: trying to use kubectl with invalid server
Jul  5 09:28:38.304: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jul  5 09:31:08.762: INFO: rc: 255
Jul  5 09:31:08.762: INFO: got err error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0705 09:28:38.764402     219 merged_client_builder.go:163] Using in-cluster namespace
I0705 09:29:08.765356     219 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0705 09:29:08.765455     219 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0705 09:29:38.766330     219 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0705 09:29:38.766401     219 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0705 09:29:38.766429     219 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0705 09:30:08.767381     219 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0705 09:30:08.767458     219 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0705 09:30:38.768421     219 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0705 09:30:38.768490     219 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0705 09:31:08.772238     219 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30003 milliseconds
I0705 09:31:08.772308     219 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0705 09:31:08.772349     219 helpers.go:235] Connection error: Get http://invalid/api?timeout=32s: dial tcp: i/o timeout
F0705 09:31:08.772364     219 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc00047a140, 0x65, 0x9a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307ec20, 0xc000000003, 0x0, 0x0, 0xc0005faaf0, 0x2, 0x2610819, 0xa, 0x74, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x307ec20, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005f8b30, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0005ea240, 0x36, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2094d80, 0xc0005dea80, 0x1f1b498)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0001d3180, 0xc0001783c0, 0x1, 0x3)
... skipping 94 lines ...
	/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jul  5 09:31:08.762: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jul  5 09:31:09.373: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jul  5 09:31:09.373: INFO: stdout: "I0705 09:31:09.357020     230 merged_client_builder.go:121] Using in-cluster configuration\nI0705 09:31:09.360977     230 merged_client_builder.go:121] Using in-cluster configuration\nI0705 09:31:09.367646     230 merged_client_builder.go:121] Using in-cluster configuration\nI0705 09:31:09.387003     230 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 19 milliseconds\nNo resources found in invalid namespace.\n"
Jul  5 09:31:09.373: INFO: stdout: I0705 09:31:09.357020     230 merged_client_builder.go:121] Using in-cluster configuration
... skipping 7 lines ...
Jul  5 09:33:39.918: INFO: rc: 255
Jul  5 09:33:39.919: INFO: stdout: I0705 09:31:09.898445     242 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig
I0705 09:31:39.900833     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0705 09:31:39.900913     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0705 09:32:09.902737     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds
I0705 09:32:09.902812     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0705 09:32:09.902843     242 shortcut.go:89] Error loading discovery information: Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0705 09:32:39.903603     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0705 09:32:39.903679     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0705 09:33:09.904383     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0705 09:33:09.904542     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0705 09:33:39.905595     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
I0705 09:33:39.905690     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
I0705 09:33:39.905764     242 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout
F0705 09:33:39.905786     242 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0003c0000, 0x65, 0xb7)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307ec20, 0xc000000003, 0x0, 0x0, 0xc000198000, 0x2, 0x2610819, 0xa, 0x74, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x307ec20, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc00004cc00, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00014a500, 0x36, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2094d80, 0xc000569da0, 0x1f1b498)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000434500, 0xc000568fc0, 0x1, 0x3)
... skipping 84 lines ...
	/usr/local/go/src/net/dnsclient_unix.go:255 +0x347
net.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0005f4140, 0xc0003c9200, 0x25, 0xc000332300, 0xc00063001c)
	/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb
created by net.(*Resolver).goLookupIPCNAMEOrder.func3
	/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8

Jul  5 09:33:39.920: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1:\nCommand stdout:\nI0705 09:31:09.898445     242 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig\nI0705 09:31:39.900833     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0705 09:31:39.900913     242 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0705 09:32:09.902737     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds\nI0705 09:32:09.902812     242 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0705 09:32:09.902843     242 shortcut.go:89] Error loading discovery information: Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0705 09:32:39.903603     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0705 09:32:39.903679     242 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0705 09:33:09.904383     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0705 09:33:09.904542     242 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0705 09:33:39.905595     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds\nI0705 09:33:39.905690     242 cached_discovery.go:121] skipped caching discovery info due to Get \"https://kubernetes.default.svc:443/api?timeout=32s\": dial tcp: i/o timeout\nI0705 09:33:39.905764     242 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout\nF0705 09:33:39.905786     242 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout\ngoroutine 1 [running]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0003c0000, 0x65, 0xb7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307ec20, 0xc000000003, 0x0, 0x0, 0xc000198000, 0x2, 0x2610819, 0xa, 0x74, 0x40e300)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x307ec20, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc00004cc00, 0x1, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00014a500, 0x36, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 
+0x288\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2094d80, 0xc000569da0, 0x1f1b498)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000434500, 0xc000568fc0, 0x1, 0x3)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x159\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000434500, 0xc000568f90, 0x3, 0x3, 0xc000434500, 0xc000568f90)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00039d900, 0xc000132120, 0xc000100050, 0x5)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897\nmain.main()\n\t_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d\n\ngoroutine 18 [chan receive]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x307ec20)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b\ncreated by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xdf\n\ngoroutine 6 [select]:\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1f1b3b8, 0x2093260, 0xc00035e000, 0x1, 0xc000102b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1f1b3b8, 0x12a05f200, 0x0, 0x1, 0xc000102b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1f1b3b8, 0x12a05f200, 0xc000102b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d\ncreated by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96\n\ngoroutine 95 [chan receive]:\nnet.(*Resolver).goLookupIPCNAMEOrder.func4(0xc0003c9200, 0x25, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:607 +0xab\nnet.(*Resolver).goLookupIPCNAMEOrder(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc000174080, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:617 +0x806\nnet.(*Resolver).lookupIP(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0x1e1af99, 0x3, 0xc000174080, 0x16, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/lookup_unix.go:102 +0xe5\nnet.glob..func1(0x20cb9a0, 0xc0003f0cc0, 0xc000321fc0, 0x1e1af99, 0x3, 0xc000174080, 0x16, 0xc0002f5c20, 0x0, 
0xc0004bf6e0, ...)\n\t/usr/local/go/src/net/hook.go:23 +0x72\nnet.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/lookup.go:293 +0xba\ninternal/singleflight.(*Group).doCall(0x307d3f0, 0xc000100780, 0xc000174180, 0x1a, 0xc0003f0d00)\n\t/usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e\ncreated by internal/singleflight.(*Group).DoChan\n\t/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc\n\ngoroutine 123 [IO wait]:\ninternal/poll.runtime_pollWait(0x7efceafca4d8, 0x72, 0xffffffffffffffff)\n\t/usr/local/go/src/runtime/netpoll.go:222 +0x55\ninternal/poll.(*pollDesc).wait(0xc000388f98, 0x72, 0x200, 0x200, 0xffffffffffffffff)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc000388f80, 0xc000670800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5\nnet.(*netFD).Read(0xc000388f80, 0xc000670800, 0x200, 0x200, 0x0, 0xc0000963e0, 0x450c8c)\n\t/usr/local/go/src/net/fd_posix.go:55 +0x4f\nnet.(*conn).Read(0xc000442028, 0xc000670800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/net.go:183 +0x91\nnet.dnsPacketRoundTrip(0x20dfa40, 0xc000442028, 0x6e726562756bbebf, 0x6665642e73657465, 0x6376732e746c7561, 0x72657473756c632e, 0x2e6c61636f6c2e, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:86 +0x135\nnet.(*Resolver).exchange(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0006041f0, 0xe, 0x74656e726562756b, 0x75616665642e7365, 0x632e6376732e746c, 0x6c2e72657473756c, 0x2e6c61636f, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:165 +0x4a8\nnet.(*Resolver).tryOneName(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0005f4140, 0xc0003c9200, 0x25, 0xc000300001, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:255 +0x347\nnet.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0005f4140, 0xc0003c9200, 0x25, 0xc000332300, 0xc000630001)\n\t/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb\ncreated by net.(*Resolver).goLookupIPCNAMEOrder.func3\n\t/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8\n\ngoroutine 124 [IO wait]:\ninternal/poll.runtime_pollWait(0x7efceafca3f0, 0x72, 0xffffffffffffffff)\n\t/usr/local/go/src/runtime/netpoll.go:222 +0x55\ninternal/poll.(*pollDesc).wait(0xc00050ec18, 0x72, 0x200, 0x200, 0xffffffffffffffff)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc00050ec00, 0xc0007e0800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5\nnet.(*netFD).Read(0xc00050ec00, 0xc0007e0800, 0x200, 0x200, 0x0, 0xc00066e3e0, 0x450c8c)\n\t/usr/local/go/src/net/fd_posix.go:55 +0x4f\nnet.(*conn).Read(0xc00000e038, 0xc0007e0800, 0x200, 0x200, 0x0, 0x0, 0x0)\n\t/usr/local/go/src/net/net.go:183 +0x91\nnet.dnsPacketRoundTrip(0x20dfa40, 0xc00000e038, 0x6e726562756b5b61, 0x6665642e73657465, 0x6376732e746c7561, 0x72657473756c632e, 0x2e6c61636f6c2e, 0x0, 0x0, 0x0, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:86 +0x135\nnet.(*Resolver).exchange(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0006041f0, 0xe, 0x74656e726562756b, 0x75616665642e7365, 0x632e6376732e746c, 0x6c2e72657473756c, 0x2e6c61636f, ...)\n\t/usr/local/go/src/net/dnsclient_unix.go:165 +0x4a8\nnet.(*Resolver).tryOneName(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0005f4140, 0xc0003c9200, 0x25, 0x1c, 0x0, 0x0, 0x0, 
...)\n\t/usr/local/go/src/net/dnsclient_unix.go:255 +0x347\nnet.(*Resolver).goLookupIPCNAMEOrder.func3.1(0x307d3e0, 0x20cb9a0, 0xc0003f0cc0, 0xc0005f4140, 0xc0003c9200, 0x25, 0xc000332300, 0xc00063001c)\n\t/usr/local/go/src/net/dnsclient_unix.go:601 +0xbb\ncreated by net.(*Resolver).goLookupIPCNAMEOrder.func3\n\t/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8\n\nstderr:\n+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
    error running /tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9962 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1:
    Command stdout:
    I0705 09:31:09.898445     242 loader.go:372] Config loaded from file:  /tmp/icc-override.kubeconfig
    I0705 09:31:39.900833     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0705 09:31:39.900913     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0705 09:32:09.902737     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30001 milliseconds
    I0705 09:32:09.902812     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0705 09:32:09.902843     242 shortcut.go:89] Error loading discovery information: Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0705 09:32:39.903603     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0705 09:32:39.903679     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0705 09:33:09.904383     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0705 09:33:09.904542     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0705 09:33:39.905595     242 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s  in 30000 milliseconds
    I0705 09:33:39.905690     242 cached_discovery.go:121] skipped caching discovery info due to Get "https://kubernetes.default.svc:443/api?timeout=32s": dial tcp: i/o timeout
    I0705 09:33:39.905764     242 helpers.go:235] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: i/o timeout
    F0705 09:33:39.905786     242 helpers.go:116] Unable to connect to the server: dial tcp: i/o timeout
    goroutine 1 [running]:
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0003c0000, 0x65, 0xb7)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307ec20, 0xc000000003, 0x0, 0x0, 0xc000198000, 0x2, 0x2610819, 0xa, 0x74, 0x40e300)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x307ec20, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc00004cc00, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
    k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00014a500, 0x36, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2094d80, 0xc000569da0, 0x1f1b498)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
    k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000434500, 0xc000568fc0, 0x1, 0x3)
... skipping 88 lines ...
    	/usr/local/go/src/net/dnsclient_unix.go:600 +0xd8
    
    stderr:
    + /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'
    command terminated with exit code 255
    
    error:
    exit status 255
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.RunHostCmdOrDie(0xc00410f530, 0xc, 0x6fb0c19, 0x5, 0xc003d28050, 0x4a, 0xb, 0xc003d28050)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1102 +0x225
... skipping 216 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should handle in-cluster config [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:646

    Jul  5 09:33:39.921: Unexpected error:
        <exec.CodeExitError>: exit status 255 (Code: 255)
        ... skipping duplicated error output; identical to the exec.CodeExitError dump above ...
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1102
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":25,"skipped":206,"failed":4,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config"]}
Jul  5 09:33:42.009: INFO: Running AfterSuite actions on all nodes
Jul  5 09:33:42.009: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:33:42.009: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:33:42.009: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:33:42.009: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:33:42.010: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 17 lines ...
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 09:21:27.975: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:21:58.004: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:21:58.004: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:22:33.035: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:23:03.066: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:23:03.066: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:23:38.047: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:24:08.076: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:24:08.076: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:24:43.036: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:25:13.065: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:25:13.065: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:25:48.034: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:26:18.063: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:26:18.063: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:26:53.039: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:27:23.070: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:27:23.070: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:27:58.035: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:28:28.064: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:28:28.064: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:29:03.035: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:29:33.066: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:29:33.066: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:30:08.033: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:30:38.063: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:30:38.063: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:31:13.036: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:31:43.067: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:31:43.067: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:32:18.035: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:32:48.065: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:32:48.065: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:33:18.095: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:33:48.125: INFO: Unable to read jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local from pod dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f: the server is currently unable to handle the request (get pods dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f)
Jul  5 09:33:48.126: INFO: Lookups using dns-5460/dns-test-3e2afd22-864a-435a-ab50-e9e754db2d0f failed for: [wheezy_udp@dns-test-service-3.dns-5460.svc.cluster.local jessie_udp@dns-test-service-3.dns-5460.svc.cluster.local]

Jul  5 09:33:48.126: FAIL: Unexpected error:
    <*errors.errorString | 0xc000250250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 213 lines ...
• Failure [782.155 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:33:48.126: Unexpected error:
      <*errors.errorString | 0xc000250250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":18,"skipped":99,"failed":3,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
Jul  5 09:33:49.822: INFO: Running AfterSuite actions on all nodes
Jul  5 09:33:49.822: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:33:49.822: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:33:49.822: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:33:49.822: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:33:49.822: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 29 lines ...
Jul  5 09:28:51.111: INFO: PersistentVolume nfs-m5jzt found and phase=Bound (29.427934ms)
Jul  5 09:28:51.142: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kfhxh] to have phase Bound
Jul  5 09:28:51.172: INFO: PersistentVolumeClaim pvc-kfhxh found and phase=Bound (30.054471ms)
STEP: Checking pod has write access to PersistentVolumes
Jul  5 09:28:51.203: INFO: Creating nfs test pod
Jul  5 09:28:51.238: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 09:28:51.238: INFO: Waiting up to 5m0s for pod "pvc-tester-hntwb" in namespace "pv-782" to be "Succeeded or Failed"
Jul  5 09:28:51.269: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.747986ms
Jul  5 09:28:53.299: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061382689s
Jul  5 09:28:55.330: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092097947s
Jul  5 09:28:57.361: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123080639s
Jul  5 09:28:59.390: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151925196s
Jul  5 09:29:01.420: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.18213345s
... skipping 138 lines ...
Jul  5 09:33:43.766: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.528152103s
Jul  5 09:33:45.796: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.558205607s
Jul  5 09:33:47.827: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.588837786s
Jul  5 09:33:49.858: INFO: Pod "pvc-tester-hntwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.619805239s
Jul  5 09:33:51.859: INFO: Deleting pod "pvc-tester-hntwb" in namespace "pv-782"
Jul  5 09:33:51.892: INFO: Wait up to 5m0s for pod "pvc-tester-hntwb" to be fully deleted
Jul  5 09:34:01.953: FAIL: Unexpected error:
    <*errors.errorString | 0xc002f5cee0>: {
        s: "pod \"pvc-tester-hntwb\" did not exit with Success: pod \"pvc-tester-hntwb\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-hntwb\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-hntwb" did not exit with Success: pod "pvc-tester-hntwb" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-hntwb" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000229080)
... skipping 26 lines ...
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:28:47 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Created: Created container nfs-server
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:28:47 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Started: Started container nfs-server
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:28:50 +0000 UTC - event for pvc-25mxm: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:28:50 +0000 UTC - event for pvc-mq4tw: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:28:51 +0000 UTC - event for pvc-tester-hntwb: {default-scheduler } Scheduled: Successfully assigned pv-782/pvc-tester-hntwb to ip-172-20-52-221.us-east-2.compute.internal
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:30:54 +0000 UTC - event for pvc-tester-hntwb: {kubelet ip-172-20-52-221.us-east-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-xtxkf]: timed out waiting for the condition
Jul  5 09:34:14.262: INFO: At 2021-07-05 09:31:53 +0000 UTC - event for pvc-tester-hntwb: {kubelet ip-172-20-52-221.us-east-2.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-hqgd2" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.3.51:/exports /var/lib/kubelet/pods/b2f51ab4-d390-45c1-804d-17cbbaa61bd8/volumes/kubernetes.io~nfs/nfs-hqgd2
Output: mount.nfs: Connection timed out

Jul  5 09:34:14.262: INFO: At 2021-07-05 09:34:02 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Killing: Stopping container nfs-server
Jul  5 09:34:14.293: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233

      Jul  5 09:34:01.953: Unexpected error:
          <*errors.errorString | 0xc002f5cee0>: {
              s: "pod \"pvc-tester-hntwb\" did not exit with Success: pod \"pvc-tester-hntwb\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-hntwb\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-hntwb" did not exit with Success: pod "pvc-tester-hntwb" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-hntwb" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238
------------------------------
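
The five-minute loop above ("Waiting up to 5m0s for pod \"pvc-tester-hntwb\" ... to be \"Succeeded or Failed\"") is a pod-phase poll; the pod never left Pending because its NFS mount kept timing out. A hedged sketch of that wait pattern with client-go, using a hypothetical helper name rather than the framework's actual code:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls a pod until it reaches Succeeded or Failed
// (sketch only; the real helper lives in test/e2e/framework).
func waitForPodCompletion(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // a hard API error aborts the poll
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			// Still Pending/Running (e.g. stuck on the NFS mount above): retry.
			return false, nil
		}
	})
}

With a real clientset this returns nil only once the pod reaches Succeeded, which in the run above never happened inside the 5m0s budget.
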
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":46,"skipped":363,"failed":1,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 09:29:51.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
Jul  5 09:32:00.135: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: the server is currently unable to handle the request (get pods dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b)
Jul  5 09:32:30.165: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: the server is currently unable to handle the request (get pods dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b)
Jul  5 09:33:00.196: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: the server is currently unable to handle the request (get pods dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b)
Jul  5 09:33:30.227: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: the server is currently unable to handle the request (get pods dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b)
Jul  5 09:34:00.260: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: the server is currently unable to handle the request (get pods dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b)
Jul  5 09:34:30.294: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: the server is currently unable to handle the request (get pods dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b)
Jul  5 09:35:00.013: FAIL: Unable to read jessie_tcp@dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-3585/pods/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b/proxy/results/jessie_tcp@dns-test-service-2.dns-3585.svc.cluster.local": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7ff06ea593c8, 0x18, 0xc0034cab40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc00322da40, 0x29e9900, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
testing.tRunner(0xc000876300, 0x71cf618)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0705 09:35:00.014534   12630 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  5 09:35:00.013: Unable to read jessie_tcp@dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-3585/pods/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b/proxy/results/jessie_tcp@dns-test-service-2.dns-3585.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7ff06ea593c8, 0x18, 0xc0034cab40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc00322da40, 0x29e9900, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x780f3c8, 0xc00005e058, 0xc0034cab01, 0xc0034cab40, 0xc00322da40, 0x67ba9a0, 0xc00322da40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x780f3c8, 0xc00005e058, 0x12a05f200, 0x8bb2c97000, 0xc00322da40, 0x6cf83e0, 0x24f8401)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc002941dc0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003bb9800, 0xc, 0x10, 0x6fb5f5e, 0x7, 0xc0036bcc00, 0x78a18a8, 0xc0022fa420, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000cbc420, 0xc0036bcc00, 0xc003bb9800, 0xc, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:322 +0xb2f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000876300)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000876300)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b\ntesting.tRunner(0xc000876300, 0x71cf618)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6b4ac20, 0xc0034b0440)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6b4ac20, 0xc0034b0440)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000296ea0, 0x189, 0x87cadfb, 0x7d, 0xd9, 0xc003201800, 0xa8a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x628e540, 0x76c5570)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000296ea0, 0x189, 0xc001967648, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000296ea0, 0x189, 0xc001967730, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x7059d05, 0x24, 0xc001967990, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7ff06ea593c8, 0x18, 0xc0034cab40)
... skipping 231 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 09:35:00.013: Unable to read jessie_tcp@dns-test-service-2.dns-3585.svc.cluster.local from pod dns-3585/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-3585/pods/dns-test-9e49810d-47fc-4b91-abd4-55949f80e46b/proxy/results/jessie_tcp@dns-test-service-2.dns-3585.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":46,"skipped":363,"failed":2,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Jul  5 09:35:01.858: INFO: Running AfterSuite actions on all nodes
Jul  5 09:35:01.858: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:35:01.858: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:35:01.858: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:35:01.858: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:35:01.858: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 12 lines ...
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0705 09:30:48.798610   12493 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0705 09:30:48.798700   12493 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  5 09:35:48.859: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 09:35:48.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2290" for this suite.


• [SLOW TEST:306.506 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":25,"skipped":196,"failed":4,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
Jul  5 09:35:48.928: INFO: Running AfterSuite actions on all nodes
Jul  5 09:35:48.928: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:35:48.928: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:35:48.928: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:35:48.928: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:35:48.928: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 43 lines ...
Jul  5 09:34:36.882: INFO: Running '/tmp/kubectl1688043009/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9634 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.66.5.36:80 2>&1 || true; echo; done'
Jul  5 09:36:09.495: INFO: stderr: shell trace of the 150-iteration loop: "+ seq 1 150", then "+ wget -q -T 1 -O - http://100.66.5.36:80", "+ true" (after failed fetches) and "+ echo" for every iteration (verbatim repetitions elided)
Jul  5 09:36:09.496: INFO: stdout: interleaved "wget: download timed out" messages and "up-down-1-pdw5w" responses across the 150 requests; up-down-1-pdw5w is the only endpoint pod that ever answered (verbatim repetitions elided)
Jul  5 09:36:09.496: INFO: Unable to reach the following endpoints of service 100.66.5.36: map[up-down-1-459nh:{} up-down-1-xqdh5:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9634
STEP: Deleting pod verify-service-up-exec-pod-zg265 in namespace services-9634
Jul  5 09:36:14.569: FAIL: Unexpected error:
    <*errors.errorString | 0xc003a7e300>: {
        s: "service verification failed for: 100.66.5.36\nexpected [up-down-1-459nh up-down-1-pdw5w up-down-1-xqdh5]\nreceived [up-down-1-pdw5w wget: download timed out]",
    }
    service verification failed for: 100.66.5.36
    expected [up-down-1-459nh up-down-1-pdw5w up-down-1-xqdh5]
    received [up-down-1-pdw5w wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.8()
... skipping 217 lines ...
• Failure [330.136 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1033

  Jul  5 09:36:14.569: Unexpected error:
      <*errors.errorString | 0xc003a7e300>: {
          s: "service verification failed for: 100.66.5.36\nexpected [up-down-1-459nh up-down-1-pdw5w up-down-1-xqdh5]\nreceived [up-down-1-pdw5w wget: download timed out]",
      }
      service verification failed for: 100.66.5.36
      expected [up-down-1-459nh up-down-1-pdw5w up-down-1-xqdh5]
      received [up-down-1-pdw5w wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1049
------------------------------
{"msg":"FAILED [sig-network] Services should be able to up and down services","total":-1,"completed":45,"skipped":378,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","[sig-network] Services should be able to up and down services"]}
Jul  5 09:36:16.171: INFO: Running AfterSuite actions on all nodes
Jul  5 09:36:16.171: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:36:16.171: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:36:16.171: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:36:16.171: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:36:16.171: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 34 lines ...
Jul  5 09:31:13.472: INFO: PersistentVolume nfs-glq7h found and phase=Bound (29.640454ms)
Jul  5 09:31:13.501: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2bz9f] to have phase Bound
Jul  5 09:31:13.530: INFO: PersistentVolumeClaim pvc-2bz9f found and phase=Bound (29.30892ms)
STEP: Checking pod has write access to PersistentVolumes
Jul  5 09:31:13.559: INFO: Creating nfs test pod
Jul  5 09:31:13.592: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 09:31:13.592: INFO: Waiting up to 5m0s for pod "pvc-tester-7cx26" in namespace "pv-452" to be "Succeeded or Failed"
Jul  5 09:31:13.621: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 29.024261ms
Jul  5 09:31:15.653: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060892477s
Jul  5 09:31:17.683: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091191115s
Jul  5 09:31:19.713: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121468808s
Jul  5 09:31:21.743: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151004876s
Jul  5 09:31:23.774: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181758847s
... skipping 138 lines ...
Jul  5 09:36:05.990: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.398191845s
Jul  5 09:36:08.021: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.429109612s
Jul  5 09:36:10.051: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.4589793s
Jul  5 09:36:12.081: INFO: Pod "pvc-tester-7cx26": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.489177341s
Jul  5 09:36:14.081: INFO: Deleting pod "pvc-tester-7cx26" in namespace "pv-452"
Jul  5 09:36:14.113: INFO: Wait up to 5m0s for pod "pvc-tester-7cx26" to be fully deleted
Jul  5 09:36:24.173: FAIL: Unexpected error:
    <*errors.errorString | 0xc003ae0ad0>: {
        s: "pod \"pvc-tester-7cx26\" did not exit with Success: pod \"pvc-tester-7cx26\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-7cx26\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-7cx26" did not exit with Success: pod "pvc-tester-7cx26" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-7cx26" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0000eb980)
... skipping 24 lines ...
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:31:10 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned pv-452/nfs-server to ip-172-20-38-136.us-east-2.compute.internal
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:31:11 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:31:11 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Created: Created container nfs-server
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:31:11 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Started: Started container nfs-server
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:31:13 +0000 UTC - event for pvc-tester-7cx26: {default-scheduler } Scheduled: Successfully assigned pv-452/pvc-tester-7cx26 to ip-172-20-55-216.us-east-2.compute.internal
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:33:16 +0000 UTC - event for pvc-tester-7cx26: {kubelet ip-172-20-55-216.us-east-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-f7d6b]: timed out waiting for the condition
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:34:14 +0000 UTC - event for pvc-tester-7cx26: {kubelet ip-172-20-55-216.us-east-2.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-glq7h" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.3.77:/exports /var/lib/kubelet/pods/102cd6b2-10b7-4cd6-af5e-ceebd6c1df36/volumes/kubernetes.io~nfs/nfs-glq7h
Output: mount.nfs: Connection timed out

Jul  5 09:36:30.476: INFO: At 2021-07-05 09:35:31 +0000 UTC - event for pvc-tester-7cx26: {kubelet ip-172-20-55-216.us-east-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[kube-api-access-f7d6b volume1]: timed out waiting for the condition
Jul  5 09:36:30.476: INFO: At 2021-07-05 09:36:24 +0000 UTC - event for nfs-server: {kubelet ip-172-20-38-136.us-east-2.compute.internal} Killing: Stopping container nfs-server
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243

      Jul  5 09:36:24.173: Unexpected error:
          <*errors.errorString | 0xc003ae0ad0>: {
              s: "pod \"pvc-tester-7cx26\" did not exit with Success: pod \"pvc-tester-7cx26\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-7cx26\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-7cx26" did not exit with Success: pod "pvc-tester-7cx26" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-7cx26" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":19,"skipped":146,"failed":4,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}
Jul  5 09:36:32.068: INFO: Running AfterSuite actions on all nodes
Jul  5 09:36:32.068: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 09:36:32.068: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 09:36:32.068: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 09:36:32.068: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 09:36:32.068: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 1614 lines ...
[raw protobuf v1.Status body] Status=Failure, Message="error trying to reach service: dial tcp 100.96.4.49:162: i..." (503; 30.063297321s)
Jul  5 09:36:31.827: INFO: (19) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-j78w8-v5mqz:462/proxy/: [raw protobuf v1.Status body] Status=Failure, Message="error trying to reach service: dial tcp 100.96.4.49:462: i..." (503; 30.063199218s)
Jul  5 09:36:31.858: INFO: Pod proxy-service-j78w8-v5mqz has the following error logs: 
Jul  5 09:36:31.859: FAIL: every path below gave the same status error: {Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Cause:{Type:UnexpectedServerResponse Message:unknown} Code:503} (identical v1.Status dumps elided per path):
0 (503; 30.033158767s): path /api/v1/namespaces/proxy-6385/pods/http:proxy-service-j78w8-v5mqz:1080/proxy/
0 (503; 30.035832081s): path /api/v1/namespaces/proxy-6385/pods/proxy-service-j78w8-v5mqz:160/proxy/
0 (503; 30.03589418s): path /api/v1/namespaces/proxy-6385/pods/https:proxy-service-j78w8-v5mqz:462/proxy/
0 (503; 30.036246187s): path /api/v1/namespaces/proxy-6385/pods/proxy-service-j78w8-v5mqz:162/proxy/
0 (503; 30.036188708s): path /api/v1/namespaces/proxy-6385/services/proxy-service-j78w8:portname2/proxy/
0 (503; 30.036224847s): path /api/v1/namespaces/proxy-6385/pods/proxy-service-j78w8-v5mqz/proxy/
0 (503; 30.036513495s): path /api/v1/namespaces/proxy-6385/services/http:proxy-service-j78w8:portname2/proxy/
0 (503; 30.036319552s): path /api/v1/namespaces/proxy-6385/services/proxy-service-j78w8:portname1/proxy/
0 (503; 30.036358009s): path /api/v1/namespaces/proxy-6385/services/https:proxy-service-j78w8:tlsportname2/proxy/
0 (503; 30.036290759s): path /api/v1/namespaces/proxy-6385/services/https:proxy-service-j78w8:tlsportname1/proxy/
0 (503; 30.060664825s): path /api/v1/namespaces/proxy-6385/services/http:proxy-service-j78w8:portname1/proxy/
0 (503; 30.061062214s): path /api/v1/namespaces/proxy-6385/pods/proxy-service-j78w8-v5mqz:1080/proxy/
0 (503; 30.067078092s): path /api/v1/namespaces/proxy-6385/pods/https:proxy-service-j78w8-v5mqz:460/proxy/
0 (503; 30.067310058s): path /api/v1/namespaces/proxy-6385/pods/https:proxy-service-j78w8-v5mqz:443/proxy/
0 (503; 30.067201413s): path /api/v1/namespaces/proxy-6385/pods/http:proxy-service-j78w8-v5mqz:162/proxy/
0 (503; 30.067331121s): path /api/v1/namespaces/proxy-6385/pods/http:proxy-service-j78w8-v5mqz:160/proxy/
1 (503; 30.032722595s): path /api/v1/namespaces/proxy-6385/pods/proxy-service-j78w8-v5mqz/proxy/
1 (503; 30.032708942s): path /api/v1/namespaces/proxy-6385/pods/http:proxy-service-j78w8-v5mqz:1080/proxy/
1 (503; 30.035404381s): path /api/v1/namespaces/proxy-6385/pods/https:proxy-service-j78w8-v5mqz:462/proxy/
1 (503; 30.035499272s): path /api/v1/namespaces/proxy-6385/pods/http:proxy-service-j78w8-v5mqz:160/proxy/
1 (503; 30.035357649s): path /api/v1/namespaces/proxy-6385/pods/https:proxy-service-j78w8-v5mqz:460/proxy/
1 (503; 30.035511401s): path /api/v1/namespaces/proxy-6385/pods/proxy-service-j78w8-v5mqz:1080/proxy/
1 (503; 30.035961597s): path /api/v1/namespaces/proxy-6385/services/http:proxy-service-j78w8:portname1/proxy/