PR       olemarkus: Enable IRSA for CCM
Result   FAILURE
Tests    0 failed / 0 succeeded
Started  2021-07-09 09:59
Elapsed  51m39s
Revision 534f6efbc22411379c9b16572af9377ed8c19d80
Refs     11818

No Test Failures!


Error lines from build-log.txt

... skipping 493 lines ...
I0709 10:04:16.315274    4243 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0709 10:04:16.332140   11704 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0709 10:04:16.332686   11704 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0709 10:04:16.332708   11704 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
W0709 10:04:16.813718    4243 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0709 10:04:16.813786    4243 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --yes
I0709 10:04:16.830367   11715 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0709 10:04:16.830729   11715 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0709 10:04:16.830797   11715 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
I0709 10:04:17.293673    4243 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/09 10:04:17 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0709 10:04:17.302760    4243 http.go:37] curl https://ip.jsb.workers.dev
I0709 10:04:17.408109    4243 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.0-beta.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=kubenet --container-runtime=containerd --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.serviceAccountIssuerDiscovery.discoveryStore=s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery --override=cluster.spec.serviceAccountIssuerDiscovery.enableAWSOIDCProvider=true --admin-access 34.122.95.176/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-1a --master-size c5.large
I0709 10:04:17.424375   11725 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0709 10:04:17.424826   11725 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0709 10:04:17.424837   11725 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
I0709 10:04:17.470055   11725 create_cluster.go:792] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 33 lines ...
I0709 10:04:41.259563    4243 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0709 10:04:41.274899   11745 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0709 10:04:41.274987   11745 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0709 10:04:41.274992   11745 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
Validating cluster e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

W0709 10:04:42.323021   11745 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
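The placeholder failure above can be checked by hand. A minimal sketch (the cluster name is taken from this run; the `dig` command is shown as a comment since it needs the live cluster, and the comparison below mirrors what the validator is effectively doing):

```shell
# kops seeds the API record with the documentation placeholder 203.0.113.123
# until dns-controller rewrites it with the real master IP. To see which
# address is currently served (live-cluster command, comment only):
#   dig +short api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io
placeholder="203.0.113.123"
resolved="203.0.113.123"   # stand-in for the dig output during startup
if [ "$resolved" = "$placeholder" ]; then
  echo "API DNS still at kops placeholder; wait for dns-controller"
fi
```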
W0709 10:04:52.356332   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:05:02.391063   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:05:12.426217   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:05:22.456353   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:05:32.491710   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:05:42.526625   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:05:52.556137   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:06:02.590745   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:06:12.625873   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:06:22.658149   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:06:32.703707   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:06:42.737449   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:06:52.769299   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:07:02.798344   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:07:12.843612   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:07:22.874650   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:07:32.909793   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:07:42.954854   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:07:52.982998   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:08:03.016372   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0709 10:08:13.050278   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 10 lines ...
Node	ip-172-20-35-137.us-west-1.compute.internal	master "ip-172-20-35-137.us-west-1.compute.internal" is missing kube-controller-manager pod
Node	ip-172-20-35-137.us-west-1.compute.internal	master "ip-172-20-35-137.us-west-1.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-autoscaler-6f594f4c58-f84kr	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-f84kr" is pending
Pod	kube-system/coredns-f45c4bf76-vl9nx		system-cluster-critical pod "coredns-f45c4bf76-vl9nx" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-kggbl	system-cluster-critical pod "ebs-csi-controller-566c97f85c-kggbl" is pending

Validation Failed
W0709 10:08:24.896358   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 11 lines ...
Pod	kube-system/coredns-autoscaler-6f594f4c58-f84kr	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-f84kr" is pending
Pod	kube-system/coredns-f45c4bf76-vl9nx		system-cluster-critical pod "coredns-f45c4bf76-vl9nx" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-kggbl	system-cluster-critical pod "ebs-csi-controller-566c97f85c-kggbl" is pending
Pod	kube-system/ebs-csi-node-clqw7			system-node-critical pod "ebs-csi-node-clqw7" is pending
Pod	kube-system/ebs-csi-node-rncc7			system-node-critical pod "ebs-csi-node-rncc7" is pending

Validation Failed
W0709 10:08:36.156113   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 9 lines ...
Machine	i-0626b22a09f5992cc				machine "i-0626b22a09f5992cc" has not yet joined cluster
Pod	kube-system/ebs-csi-controller-566c97f85c-kggbl	system-cluster-critical pod "ebs-csi-controller-566c97f85c-kggbl" is pending
Pod	kube-system/ebs-csi-node-clqw7			system-node-critical pod "ebs-csi-node-clqw7" is pending
Pod	kube-system/ebs-csi-node-gvxtb			system-node-critical pod "ebs-csi-node-gvxtb" is pending
Pod	kube-system/ebs-csi-node-rncc7			system-node-critical pod "ebs-csi-node-rncc7" is pending

Validation Failed
W0709 10:08:47.424617   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 5 lines ...
ip-172-20-55-238.us-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME			MESSAGE
Machine	i-0626b22a09f5992cc	machine "i-0626b22a09f5992cc" has not yet joined cluster

Validation Failed
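The validator retried roughly every 10 seconds until nodes joined. A quick way to tally the retries from a saved copy of build-log.txt (two sample lines from this run are inlined so the snippet is self-contained; against the real file, use `grep -c` on the file instead):

```shell
# Each retry logs a "(will retry): cluster not yet healthy" warning;
# counting those lines gives the number of failed validation attempts.
log='W0709 10:04:52.356332   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
W0709 10:05:02.391063   11745 validate_cluster.go:232] (will retry): cluster not yet healthy'
retries=$(printf '%s\n' "$log" | grep -c 'cluster not yet healthy')
echo "validation retried $retries times in this sample"
```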
W0709 10:08:58.707192   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 6 lines ...
ip-172-20-55-238.us-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-54-0.us-west-1.compute.internal	node "ip-172-20-54-0.us-west-1.compute.internal" of role "node" is not ready

Validation Failed
W0709 10:09:09.891491   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 6 lines ...
ip-172-20-55-238.us-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-42-78.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-42-78.us-west-1.compute.internal" is pending

Validation Failed
W0709 10:09:21.102143   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 6 lines ...
ip-172-20-55-238.us-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-48-135.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-48-135.us-west-1.compute.internal" is pending

Validation Failed
W0709 10:09:32.294497   11745 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 410 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 506 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:11:57.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7621" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":1,"skipped":2,"failed":0}
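Each finished spec emits a one-line JSON summary like the one above. The `failed` counter can be pulled out with plain shell parameter expansion (the sample line is copied from this log; no jq assumed):

```shell
# Extract the "failed" field from a ginkgo result line: strip everything
# up to the field value, then strip the trailing brace.
line='{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":1,"skipped":2,"failed":0}'
failed=${line##*\"failed\":}   # leaves: 0}
failed=${failed%%\}*}          # leaves: 0
echo "failed=$failed"
```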

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:11:57.747: INFO: Only supported for providers [vsphere] (not aws)
... skipping 95 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 49 lines ...
STEP: Destroying namespace "pod-disks-8872" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.705 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:11:59.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8310" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:11:59.226: INFO: Only supported for providers [gce gke] (not aws)
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:11:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4501" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:00.025: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 48 lines ...
STEP: Creating a kubernetes client
Jul  9 10:11:58.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1836
STEP: calling kubectl wait --for=delete
Jul  9 10:12:00.365: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4756 wait --for=delete pod/doesnotexist'
Jul  9 10:12:00.629: INFO: stderr: ""
Jul  9 10:12:00.629: INFO: stdout: ""
Jul  9 10:12:00.629: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4756 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:00.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4756" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":1,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 75 lines ...
W0709 10:11:57.048687   12330 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  9 10:11:57.048: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Jul  9 10:11:57.200: INFO: Waiting up to 5m0s for pod "downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a" in namespace "downward-api-727" to be "Succeeded or Failed"
Jul  9 10:11:57.251: INFO: Pod "downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.778474ms
Jul  9 10:11:59.304: INFO: Pod "downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104311172s
Jul  9 10:12:01.355: INFO: Pod "downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155367832s
Jul  9 10:12:03.406: INFO: Pod "downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206305974s
STEP: Saw pod success
Jul  9 10:12:03.406: INFO: Pod "downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a" satisfied condition "Succeeded or Failed"
Jul  9 10:12:03.456: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a container dapi-container: <nil>
STEP: delete the pod
Jul  9 10:12:03.873: INFO: Waiting for pod downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a to disappear
Jul  9 10:12:03.925: INFO: Pod downward-api-c505c79a-1e8a-4e6d-b415-998131f1535a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.244 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:04.096: INFO: Only supported for providers [openstack] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:12:00.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87" in namespace "projected-4428" to be "Succeeded or Failed"
Jul  9 10:12:00.159: INFO: Pod "downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87": Phase="Pending", Reason="", readiness=false. Elapsed: 53.838664ms
Jul  9 10:12:02.213: INFO: Pod "downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107515514s
Jul  9 10:12:04.264: INFO: Pod "downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158759233s
STEP: Saw pod success
Jul  9 10:12:04.264: INFO: Pod "downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87" satisfied condition "Succeeded or Failed"
Jul  9 10:12:04.314: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87 container client-container: <nil>
STEP: delete the pod
Jul  9 10:12:04.418: INFO: Waiting for pod downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87 to disappear
Jul  9 10:12:04.468: INFO: Pod downwardapi-volume-6774023e-914f-4833-8a02-82fead65ea87 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.530 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:04.582: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:11:57.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29" in namespace "downward-api-2304" to be "Succeeded or Failed"
Jul  9 10:11:57.256: INFO: Pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29": Phase="Pending", Reason="", readiness=false. Elapsed: 51.303546ms
Jul  9 10:11:59.312: INFO: Pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106842465s
Jul  9 10:12:01.364: INFO: Pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159265891s
Jul  9 10:12:03.417: INFO: Pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211692496s
Jul  9 10:12:05.470: INFO: Pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.26465937s
STEP: Saw pod success
Jul  9 10:12:05.470: INFO: Pod "downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29" satisfied condition "Succeeded or Failed"
Jul  9 10:12:05.522: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29 container client-container: <nil>
STEP: delete the pod
Jul  9 10:12:05.889: INFO: Waiting for pod downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29 to disappear
Jul  9 10:12:05.940: INFO: Pod downwardapi-volume-b9847bfa-af15-4bd1-b4a2-d7c83f60cd29 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 40 lines ...
• [SLOW TEST:10.503 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:07.394: INFO: Only supported for providers [vsphere] (not aws)
... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:08.609: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
Jul  9 10:12:09.040: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.411 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 75 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:480
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:484
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Jul  9 10:11:59.279: INFO: Waiting up to 5m0s for pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc" in namespace "pods-9390" to be "Succeeded or Failed"
Jul  9 10:11:59.337: INFO: Pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 57.890675ms
Jul  9 10:12:01.388: INFO: Pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108368391s
Jul  9 10:12:03.439: INFO: Pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159778785s
Jul  9 10:12:05.491: INFO: Pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211468759s
Jul  9 10:12:07.542: INFO: Pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.263026318s
STEP: Saw pod success
Jul  9 10:12:07.542: INFO: Pod "pod-always-succeed97612ed1-f035-4614-ba68-736d7c6c36bc" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:09.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:484
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:09.813: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
Jul  9 10:12:10.533: INFO: AfterEach: Cleaning up test resources.
Jul  9 10:12:10.533: INFO: pvc is nil
Jul  9 10:12:10.533: INFO: Deleting PersistentVolume "hostpath-58hvq"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:10.594: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Jul  9 10:11:58.246: INFO: Waiting up to 5m0s for pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1" in namespace "emptydir-6940" to be "Succeeded or Failed"
Jul  9 10:11:58.296: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 50.805212ms
Jul  9 10:12:00.347: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101767204s
Jul  9 10:12:02.400: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154189224s
Jul  9 10:12:04.451: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205434755s
Jul  9 10:12:06.503: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257008393s
Jul  9 10:12:08.553: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Running", Reason="", readiness=true. Elapsed: 10.307882219s
Jul  9 10:12:10.606: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.360771455s
STEP: Saw pod success
Jul  9 10:12:10.606: INFO: Pod "pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1" satisfied condition "Succeeded or Failed"
Jul  9 10:12:10.657: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1 container test-container: <nil>
STEP: delete the pod
Jul  9 10:12:10.813: INFO: Waiting for pod pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1 to disappear
Jul  9 10:12:10.863: INFO: Pod pod-847a9425-0cfe-4c9e-b40c-e80391dc21d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 30 lines ...
• [SLOW TEST:6.656 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:11.279: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:13.623 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:11.535: INFO: Only supported for providers [openstack] (not aws)
... skipping 61 lines ...
• [SLOW TEST:12.285 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:19.077 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:16.018: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
• [SLOW TEST:7.082 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:5.762 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:17.074: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:17.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-6259" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Jul  9 10:12:03.948: INFO: PersistentVolumeClaim pvc-dg689 found but phase is Pending instead of Bound.
Jul  9 10:12:05.999: INFO: PersistentVolumeClaim pvc-dg689 found and phase=Bound (2.102264038s)
Jul  9 10:12:05.999: INFO: Waiting up to 3m0s for PersistentVolume local-7qtkl to have phase Bound
Jul  9 10:12:06.049: INFO: PersistentVolume local-7qtkl found and phase=Bound (50.094528ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lhlm
STEP: Creating a pod to test subpath
Jul  9 10:12:06.202: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lhlm" in namespace "provisioning-8661" to be "Succeeded or Failed"
Jul  9 10:12:06.253: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm": Phase="Pending", Reason="", readiness=false. Elapsed: 50.333073ms
Jul  9 10:12:08.304: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101502769s
Jul  9 10:12:10.365: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162596018s
Jul  9 10:12:12.420: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218080313s
Jul  9 10:12:14.473: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270943678s
Jul  9 10:12:16.526: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.323994264s
STEP: Saw pod success
Jul  9 10:12:16.526: INFO: Pod "pod-subpath-test-preprovisionedpv-lhlm" satisfied condition "Succeeded or Failed"
Jul  9 10:12:16.586: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-lhlm container test-container-volume-preprovisionedpv-lhlm: <nil>
STEP: delete the pod
Jul  9 10:12:17.178: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lhlm to disappear
Jul  9 10:12:17.229: INFO: Pod pod-subpath-test-preprovisionedpv-lhlm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lhlm
Jul  9 10:12:17.229: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lhlm" in namespace "provisioning-8661"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:18.082: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 198 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:12:11.889: INFO: Waiting up to 5m0s for pod "metadata-volume-8500911f-175f-4333-838f-1ea7be75f501" in namespace "downward-api-7380" to be "Succeeded or Failed"
Jul  9 10:12:11.940: INFO: Pod "metadata-volume-8500911f-175f-4333-838f-1ea7be75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 51.606064ms
Jul  9 10:12:13.990: INFO: Pod "metadata-volume-8500911f-175f-4333-838f-1ea7be75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101784635s
Jul  9 10:12:16.041: INFO: Pod "metadata-volume-8500911f-175f-4333-838f-1ea7be75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15236861s
Jul  9 10:12:18.091: INFO: Pod "metadata-volume-8500911f-175f-4333-838f-1ea7be75f501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.20221063s
STEP: Saw pod success
Jul  9 10:12:18.091: INFO: Pod "metadata-volume-8500911f-175f-4333-838f-1ea7be75f501" satisfied condition "Succeeded or Failed"
Jul  9 10:12:18.141: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod metadata-volume-8500911f-175f-4333-838f-1ea7be75f501 container client-container: <nil>
STEP: delete the pod
Jul  9 10:12:18.250: INFO: Waiting for pod metadata-volume-8500911f-175f-4333-838f-1ea7be75f501 to disappear
Jul  9 10:12:18.300: INFO: Pod metadata-volume-8500911f-175f-4333-838f-1ea7be75f501 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.815 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:18.417: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 20 lines ...
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:12:18.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-ccd0dfaf-4098-4716-894f-913086c0c786
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:18.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7935" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:18.688: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 222 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:21.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4969" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":3,"skipped":63,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:22.103: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Jul  9 10:12:16.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  9 10:12:16.455: INFO: Waiting up to 5m0s for pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4" in namespace "security-context-9304" to be "Succeeded or Failed"
Jul  9 10:12:16.507: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4": Phase="Pending", Reason="", readiness=false. Elapsed: 51.097077ms
Jul  9 10:12:18.559: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103011106s
Jul  9 10:12:20.611: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155818977s
Jul  9 10:12:22.680: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224016049s
Jul  9 10:12:24.732: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.276450574s
Jul  9 10:12:26.784: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.328753452s
STEP: Saw pod success
Jul  9 10:12:26.784: INFO: Pod "security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4" satisfied condition "Succeeded or Failed"
Jul  9 10:12:26.838: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4 container test-container: <nil>
STEP: delete the pod
Jul  9 10:12:26.950: INFO: Waiting for pod security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4 to disappear
Jul  9 10:12:27.002: INFO: Pod security-context-f52948b2-131f-4bc4-99c6-02bde1b896b4 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.967 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:27.135: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:28.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9227" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:12:28.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
Jul  9 10:12:29.107: INFO: AfterEach: Cleaning up test resources.
Jul  9 10:12:29.107: INFO: Deleting PersistentVolumeClaim "pvc-9x8nr"
Jul  9 10:12:29.160: INFO: Deleting PersistentVolume "hostpath-d552q"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":5,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:29.238: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 68 lines ...
Jul  9 10:12:18.379: INFO: PersistentVolumeClaim pvc-zlrhh found but phase is Pending instead of Bound.
Jul  9 10:12:20.430: INFO: PersistentVolumeClaim pvc-zlrhh found and phase=Bound (10.299714037s)
Jul  9 10:12:20.430: INFO: Waiting up to 3m0s for PersistentVolume local-7n47p to have phase Bound
Jul  9 10:12:20.479: INFO: PersistentVolume local-7n47p found and phase=Bound (49.342055ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tkm2
STEP: Creating a pod to test subpath
Jul  9 10:12:20.628: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tkm2" in namespace "provisioning-7699" to be "Succeeded or Failed"
Jul  9 10:12:20.677: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.72454ms
Jul  9 10:12:22.730: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101653936s
Jul  9 10:12:24.779: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151124598s
Jul  9 10:12:26.829: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20105729s
Jul  9 10:12:28.879: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250787514s
Jul  9 10:12:30.930: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301302436s
Jul  9 10:12:32.980: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.351762711s
STEP: Saw pod success
Jul  9 10:12:32.980: INFO: Pod "pod-subpath-test-preprovisionedpv-tkm2" satisfied condition "Succeeded or Failed"
Jul  9 10:12:33.029: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tkm2 container test-container-subpath-preprovisionedpv-tkm2: <nil>
STEP: delete the pod
Jul  9 10:12:33.151: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tkm2 to disappear
Jul  9 10:12:33.203: INFO: Pod pod-subpath-test-preprovisionedpv-tkm2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tkm2
Jul  9 10:12:33.203: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tkm2" in namespace "provisioning-7699"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:34.454: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  9 10:12:22.420: INFO: Waiting up to 5m0s for pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6" in namespace "emptydir-9329" to be "Succeeded or Failed"
Jul  9 10:12:22.470: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.295764ms
Jul  9 10:12:24.522: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102417335s
Jul  9 10:12:26.573: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153233722s
Jul  9 10:12:28.624: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20388232s
Jul  9 10:12:30.676: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256527524s
Jul  9 10:12:32.728: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.308401761s
Jul  9 10:12:34.782: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.362001381s
STEP: Saw pod success
Jul  9 10:12:34.782: INFO: Pod "pod-5cc2c544-a033-40e0-bb82-20260f7be9f6" satisfied condition "Succeeded or Failed"
Jul  9 10:12:34.834: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-5cc2c544-a033-40e0-bb82-20260f7be9f6 container test-container: <nil>
STEP: delete the pod
Jul  9 10:12:34.941: INFO: Waiting for pod pod-5cc2c544-a033-40e0-bb82-20260f7be9f6 to disappear
Jul  9 10:12:34.992: INFO: Pod pod-5cc2c544-a033-40e0-bb82-20260f7be9f6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":77,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:35.142: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 182 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:36.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7627" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:36.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-6915" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:36.741: INFO: Only supported for providers [vsphere] (not aws)
... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:36.935: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 112 lines ...
• [SLOW TEST:42.126 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":2,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:41.995: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:12:35.538: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699" in namespace "projected-480" to be "Succeeded or Failed"
Jul  9 10:12:35.589: INFO: Pod "downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699": Phase="Pending", Reason="", readiness=false. Elapsed: 50.69352ms
Jul  9 10:12:37.641: INFO: Pod "downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102344498s
Jul  9 10:12:39.693: INFO: Pod "downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15438905s
Jul  9 10:12:41.745: INFO: Pod "downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206248454s
STEP: Saw pod success
Jul  9 10:12:41.745: INFO: Pod "downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699" satisfied condition "Succeeded or Failed"
Jul  9 10:12:41.795: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699 container client-container: <nil>
STEP: delete the pod
Jul  9 10:12:41.904: INFO: Waiting for pod downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699 to disappear
Jul  9 10:12:41.957: INFO: Pod downwardapi-volume-9267b06a-2d67-4f07-8823-6d65a49c8699 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 14 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-6654/secret-test-d2515c73-a719-444c-80ec-0b3312e83d9d
STEP: Creating a pod to test consume secrets
Jul  9 10:12:34.834: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040" in namespace "secrets-6654" to be "Succeeded or Failed"
Jul  9 10:12:34.883: INFO: Pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040": Phase="Pending", Reason="", readiness=false. Elapsed: 49.427847ms
Jul  9 10:12:36.933: INFO: Pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099752635s
Jul  9 10:12:38.985: INFO: Pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150851533s
Jul  9 10:12:41.034: INFO: Pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200319601s
Jul  9 10:12:43.084: INFO: Pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.250230688s
STEP: Saw pod success
Jul  9 10:12:43.084: INFO: Pod "pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040" satisfied condition "Succeeded or Failed"
Jul  9 10:12:43.133: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040 container env-test: <nil>
STEP: delete the pod
Jul  9 10:12:43.238: INFO: Waiting for pod pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040 to disappear
Jul  9 10:12:43.291: INFO: Pod pod-configmaps-bb2cb104-6586-404b-997d-71cd1e223040 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.911 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:43.419: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 91 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  9 10:12:37.323: INFO: Waiting up to 5m0s for pod "pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb" in namespace "emptydir-2843" to be "Succeeded or Failed"
Jul  9 10:12:37.374: INFO: Pod "pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 51.080716ms
Jul  9 10:12:39.426: INFO: Pod "pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102798736s
Jul  9 10:12:41.479: INFO: Pod "pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155645141s
Jul  9 10:12:43.533: INFO: Pod "pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209802692s
STEP: Saw pod success
Jul  9 10:12:43.533: INFO: Pod "pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb" satisfied condition "Succeeded or Failed"
Jul  9 10:12:43.584: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb container test-container: <nil>
STEP: delete the pod
Jul  9 10:12:43.692: INFO: Waiting for pod pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb to disappear
Jul  9 10:12:43.743: INFO: Pod pod-02595da0-5cb1-4b0a-97f9-73731dffb5eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:43.860: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-6f7f9363-7d52-4fe0-b042-d3eecdca4186
STEP: Creating a pod to test consume secrets
Jul  9 10:12:29.638: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44" in namespace "projected-8903" to be "Succeeded or Failed"
Jul  9 10:12:29.690: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 51.509169ms
Jul  9 10:12:31.743: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104430798s
Jul  9 10:12:33.796: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157608933s
Jul  9 10:12:35.848: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209483747s
Jul  9 10:12:37.901: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262530535s
Jul  9 10:12:39.956: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 10.318100949s
Jul  9 10:12:42.011: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Pending", Reason="", readiness=false. Elapsed: 12.372474887s
Jul  9 10:12:44.076: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.437478448s
STEP: Saw pod success
Jul  9 10:12:44.076: INFO: Pod "pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44" satisfied condition "Succeeded or Failed"
Jul  9 10:12:44.129: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  9 10:12:44.318: INFO: Waiting for pod pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44 to disappear
Jul  9 10:12:44.372: INFO: Pod pod-projected-secrets-0b33b4c2-9b0e-47da-ad7e-ceaabab99a44 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.237 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:44.516: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:9.724 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:46.575: INFO: Only supported for providers [openstack] (not aws)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:46.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4058" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:12:46.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":6,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:46.944: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:12:53.507: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:03.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:11:56.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
W0709 10:11:57.028460   12299 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  9 10:11:57.028: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-3878" for this suite.


• [SLOW TEST:68.832 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:05.662: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
Jul  9 10:12:47.571: INFO: PersistentVolumeClaim pvc-496gz found but phase is Pending instead of Bound.
Jul  9 10:12:49.622: INFO: PersistentVolumeClaim pvc-496gz found and phase=Bound (2.102621771s)
Jul  9 10:12:49.622: INFO: Waiting up to 3m0s for PersistentVolume local-vrq9z to have phase Bound
Jul  9 10:12:49.679: INFO: PersistentVolume local-vrq9z found and phase=Bound (56.51136ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fjjs
STEP: Creating a pod to test subpath
Jul  9 10:12:49.834: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fjjs" in namespace "provisioning-9122" to be "Succeeded or Failed"
Jul  9 10:12:49.885: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 51.176571ms
Jul  9 10:12:51.941: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107825855s
Jul  9 10:12:53.993: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15976381s
Jul  9 10:12:56.046: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212508586s
STEP: Saw pod success
Jul  9 10:12:56.046: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs" satisfied condition "Succeeded or Failed"
Jul  9 10:12:56.098: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-fjjs container test-container-subpath-preprovisionedpv-fjjs: <nil>
STEP: delete the pod
Jul  9 10:12:56.212: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fjjs to disappear
Jul  9 10:12:56.264: INFO: Pod pod-subpath-test-preprovisionedpv-fjjs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fjjs
Jul  9 10:12:56.264: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fjjs" in namespace "provisioning-9122"
STEP: Creating pod pod-subpath-test-preprovisionedpv-fjjs
STEP: Creating a pod to test subpath
Jul  9 10:12:56.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fjjs" in namespace "provisioning-9122" to be "Succeeded or Failed"
Jul  9 10:12:56.418: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 51.253921ms
Jul  9 10:12:58.471: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104360509s
Jul  9 10:13:00.526: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158537832s
Jul  9 10:13:02.577: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210113763s
Jul  9 10:13:04.630: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.262993372s
STEP: Saw pod success
Jul  9 10:13:04.630: INFO: Pod "pod-subpath-test-preprovisionedpv-fjjs" satisfied condition "Succeeded or Failed"
Jul  9 10:13:04.682: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-fjjs container test-container-subpath-preprovisionedpv-fjjs: <nil>
STEP: delete the pod
Jul  9 10:13:04.792: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fjjs to disappear
Jul  9 10:13:04.843: INFO: Pod pod-subpath-test-preprovisionedpv-fjjs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fjjs
Jul  9 10:13:04.844: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fjjs" in namespace "provisioning-9122"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:390
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Jul  9 10:13:03.890: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2667" to be "Succeeded or Failed"
Jul  9 10:13:03.940: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.06742ms
Jul  9 10:13:05.991: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101005842s
STEP: Saw pod success
Jul  9 10:13:05.991: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  9 10:13:06.041: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jul  9 10:13:06.149: INFO: Waiting for pod pod-host-path-test to disappear
Jul  9 10:13:06.198: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:06.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2667" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
STEP: Creating a mutating webhook configuration
Jul  9 10:12:21.076: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:12:31.286: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:12:41.484: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:12:51.686: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:13:01.797: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:13:01.797: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 67 lines ...
Jul  9 10:13:02.726: INFO: 	Container csi-resizer ready: true, restart count 0
Jul  9 10:13:02.726: INFO: 	Container csi-snapshotter ready: true, restart count 0
Jul  9 10:13:02.726: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:02.726: INFO: 	Container liveness-probe ready: true, restart count 0
Jul  9 10:13:02.726: INFO: coredns-autoscaler-6f594f4c58-f84kr started at 2021-07-09 10:08:36 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:02.726: INFO: 	Container autoscaler ready: true, restart count 0
Jul  9 10:13:02.726: INFO: failed-jobs-history-limit-27097093--1-n6gq4 started at 2021-07-09 10:13:00 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:02.726: INFO: 	Container c ready: false, restart count 0
Jul  9 10:13:02.726: INFO: agnhost started at 2021-07-09 10:12:06 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:02.726: INFO: 	Container agnhost ready: true, restart count 0
Jul  9 10:13:02.726: INFO: ebs-csi-node-clqw7 started at 2021-07-09 10:08:36 +0000 UTC (0+3 container statuses recorded)
Jul  9 10:13:02.726: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:02.726: INFO: 	Container liveness-probe ready: true, restart count 0
... skipping 232 lines ...
Jul  9 10:13:06.509: INFO: 	Container csi-resizer ready: true, restart count 0
Jul  9 10:13:06.509: INFO: 	Container csi-snapshotter ready: true, restart count 0
Jul  9 10:13:06.509: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:06.509: INFO: 	Container liveness-probe ready: true, restart count 0
Jul  9 10:13:06.509: INFO: coredns-autoscaler-6f594f4c58-f84kr started at 2021-07-09 10:08:36 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:06.509: INFO: 	Container autoscaler ready: true, restart count 0
Jul  9 10:13:06.509: INFO: failed-jobs-history-limit-27097093--1-n6gq4 started at 2021-07-09 10:13:00 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:06.509: INFO: 	Container c ready: false, restart count 1
Jul  9 10:13:06.509: INFO: agnhost started at 2021-07-09 10:12:06 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:06.509: INFO: 	Container agnhost ready: true, restart count 0
Jul  9 10:13:06.728: INFO: 
Latency metrics for node ip-172-20-42-78.us-west-1.compute.internal
Jul  9 10:13:06.728: INFO: 
... skipping 152 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:13:01.797: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":0,"skipped":10,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:08.505: INFO: Only supported for providers [vsphere] (not aws)
... skipping 50 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:13:06.633: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5" in namespace "downward-api-6329" to be "Succeeded or Failed"
Jul  9 10:13:06.683: INFO: Pod "downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.081885ms
Jul  9 10:13:08.733: INFO: Pod "downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100421295s
STEP: Saw pod success
Jul  9 10:13:08.733: INFO: Pod "downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5" satisfied condition "Succeeded or Failed"
Jul  9 10:13:08.783: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5 container client-container: <nil>
STEP: delete the pod
Jul  9 10:13:08.892: INFO: Waiting for pod downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5 to disappear
Jul  9 10:13:08.943: INFO: Pod downwardapi-volume-51d3e499-a5a6-47b0-8d9d-a237aa889cb5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:08.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6329" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:09.058: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 151 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:10.602: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 219 lines ...
• [SLOW TEST:33.222 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:915
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:12.268: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Jul  9 10:13:11.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Jul  9 10:13:11.592: INFO: Waiting up to 5m0s for pod "test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9" in namespace "svcaccounts-775" to be "Succeeded or Failed"
Jul  9 10:13:11.641: INFO: Pod "test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.946406ms
Jul  9 10:13:13.691: INFO: Pod "test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.098949729s
STEP: Saw pod success
Jul  9 10:13:13.691: INFO: Pod "test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9" satisfied condition "Succeeded or Failed"
Jul  9 10:13:13.747: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9 container agnhost-container: <nil>
STEP: delete the pod
Jul  9 10:13:13.852: INFO: Waiting for pod test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9 to disappear
Jul  9 10:13:13.904: INFO: Pod test-pod-8e67cc82-1376-4ea9-bb09-47e1e81164d9 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:13.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-775" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:14.016: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2151-crds.webhook.example.com via the AdmissionRegistration API
Jul  9 10:12:29.063: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:12:39.269: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:12:49.477: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:12:59.668: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:13:09.774: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:13:09.775: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 78 lines ...
Jul  9 10:13:10.634: INFO: 	Container csi-resizer ready: true, restart count 0
Jul  9 10:13:10.634: INFO: 	Container csi-snapshotter ready: true, restart count 0
Jul  9 10:13:10.634: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:10.634: INFO: 	Container liveness-probe ready: true, restart count 0
Jul  9 10:13:10.634: INFO: coredns-autoscaler-6f594f4c58-f84kr started at 2021-07-09 10:08:36 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:10.634: INFO: 	Container autoscaler ready: true, restart count 0
Jul  9 10:13:10.634: INFO: failed-jobs-history-limit-27097093--1-n6gq4 started at 2021-07-09 10:13:00 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:10.634: INFO: 	Container c ready: false, restart count 1
Jul  9 10:13:10.634: INFO: agnhost started at 2021-07-09 10:12:06 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:10.634: INFO: 	Container agnhost ready: true, restart count 0
Jul  9 10:13:10.634: INFO: ebs-csi-node-clqw7 started at 2021-07-09 10:08:36 +0000 UTC (0+3 container statuses recorded)
Jul  9 10:13:10.634: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:10.634: INFO: 	Container liveness-probe ready: true, restart count 0
... skipping 199 lines ...
Jul  9 10:13:13.055: INFO: 	Container csi-resizer ready: true, restart count 0
Jul  9 10:13:13.055: INFO: 	Container csi-snapshotter ready: true, restart count 0
Jul  9 10:13:13.055: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:13.055: INFO: 	Container liveness-probe ready: true, restart count 0
Jul  9 10:13:13.055: INFO: coredns-autoscaler-6f594f4c58-f84kr started at 2021-07-09 10:08:36 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:13.055: INFO: 	Container autoscaler ready: true, restart count 0
Jul  9 10:13:13.055: INFO: failed-jobs-history-limit-27097093--1-n6gq4 started at 2021-07-09 10:13:00 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:13.055: INFO: 	Container c ready: false, restart count 1
Jul  9 10:13:13.055: INFO: agnhost started at 2021-07-09 10:12:06 +0000 UTC (0+1 container statuses recorded)
Jul  9 10:13:13.055: INFO: 	Container agnhost ready: true, restart count 0
Jul  9 10:13:13.055: INFO: ebs-csi-node-clqw7 started at 2021-07-09 10:08:36 +0000 UTC (0+3 container statuses recorded)
Jul  9 10:13:13.055: INFO: 	Container ebs-plugin ready: true, restart count 0
Jul  9 10:13:13.055: INFO: 	Container liveness-probe ready: true, restart count 0
... skipping 171 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:13:09.775: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":2,"skipped":24,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:15.062: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 195 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:993
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":8,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:15.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-2886833a-0a05-4ad7-a1e7-177e3cf0a521
STEP: Creating a pod to test consume configMaps
Jul  9 10:13:15.469: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1" in namespace "configmap-8564" to be "Succeeded or Failed"
Jul  9 10:13:15.520: INFO: Pod "pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1": Phase="Pending", Reason="", readiness=false. Elapsed: 50.685136ms
Jul  9 10:13:17.571: INFO: Pod "pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101802537s
Jul  9 10:13:19.623: INFO: Pod "pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154402955s
STEP: Saw pod success
Jul  9 10:13:19.623: INFO: Pod "pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1" satisfied condition "Succeeded or Failed"
Jul  9 10:13:19.675: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1 container agnhost-container: <nil>
STEP: delete the pod
Jul  9 10:13:19.787: INFO: Waiting for pod pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1 to disappear
Jul  9 10:13:19.842: INFO: Pod pod-configmaps-b9914223-b869-4f17-900a-e3c6c7ecbda1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 10 lines ...
Jul  9 10:12:42.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  9 10:12:42.259: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  9 10:12:42.361: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104" to be "Succeeded or Failed"
Jul  9 10:12:42.411: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 49.551691ms
Jul  9 10:12:44.469: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10733944s
Jul  9 10:12:46.519: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157490224s
Jul  9 10:12:48.569: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207885677s
STEP: Saw pod success
Jul  9 10:12:48.569: INFO: Pod "hostpath-symlink-prep-provisioning-8104" satisfied condition "Succeeded or Failed"
Jul  9 10:12:48.569: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104"
Jul  9 10:12:48.625: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" to be fully deleted
Jul  9 10:12:48.674: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-29jd
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:12:48.726: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-29jd" in namespace "provisioning-8104" to be "Succeeded or Failed"
Jul  9 10:12:48.776: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Pending", Reason="", readiness=false. Elapsed: 49.844432ms
Jul  9 10:12:50.856: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129988693s
Jul  9 10:12:52.906: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180430036s
Jul  9 10:12:54.958: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231542424s
Jul  9 10:12:57.009: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282624737s
Jul  9 10:12:59.059: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.332876817s
... skipping 3 lines ...
Jul  9 10:13:07.302: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Running", Reason="", readiness=true. Elapsed: 18.576322506s
Jul  9 10:13:09.353: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Running", Reason="", readiness=true. Elapsed: 20.627489336s
Jul  9 10:13:11.408: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Running", Reason="", readiness=true. Elapsed: 22.681722654s
Jul  9 10:13:13.462: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Running", Reason="", readiness=true. Elapsed: 24.736349111s
Jul  9 10:13:15.514: INFO: Pod "pod-subpath-test-inlinevolume-29jd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.787533009s
STEP: Saw pod success
Jul  9 10:13:15.514: INFO: Pod "pod-subpath-test-inlinevolume-29jd" satisfied condition "Succeeded or Failed"
Jul  9 10:13:15.563: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-29jd container test-container-subpath-inlinevolume-29jd: <nil>
STEP: delete the pod
Jul  9 10:13:15.674: INFO: Waiting for pod pod-subpath-test-inlinevolume-29jd to disappear
Jul  9 10:13:15.724: INFO: Pod pod-subpath-test-inlinevolume-29jd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-29jd
Jul  9 10:13:15.724: INFO: Deleting pod "pod-subpath-test-inlinevolume-29jd" in namespace "provisioning-8104"
STEP: Deleting pod
Jul  9 10:13:15.774: INFO: Deleting pod "pod-subpath-test-inlinevolume-29jd" in namespace "provisioning-8104"
Jul  9 10:13:15.878: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104" to be "Succeeded or Failed"
Jul  9 10:13:15.931: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 53.487328ms
Jul  9 10:13:17.983: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104612217s
Jul  9 10:13:20.035: INFO: Pod "hostpath-symlink-prep-provisioning-8104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156682521s
STEP: Saw pod success
Jul  9 10:13:20.035: INFO: Pod "hostpath-symlink-prep-provisioning-8104" satisfied condition "Succeeded or Failed"
Jul  9 10:13:20.035: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8104" in namespace "provisioning-8104"
Jul  9 10:13:20.091: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8104" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:20.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8104" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:12:42.075: INFO: >>> kubeConfig: /root/.kube/config
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:21.839: INFO: Only supported for providers [gce gke] (not aws)
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":15,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:35.898: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 18 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:14.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:36.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7698" for this suite.


• [SLOW TEST:22.507 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":6,"skipped":71,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:36.645: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
Jul  9 10:13:21.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
Jul  9 10:13:22.100: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  9 10:13:22.205: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415" to be "Succeeded or Failed"
Jul  9 10:13:22.255: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 50.540393ms
Jul  9 10:13:24.307: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102045188s
STEP: Saw pod success
Jul  9 10:13:24.307: INFO: Pod "hostpath-symlink-prep-provisioning-7415" satisfied condition "Succeeded or Failed"
Jul  9 10:13:24.307: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415"
Jul  9 10:13:24.360: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" to be fully deleted
Jul  9 10:13:24.411: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-csc5
Jul  9 10:13:26.565: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-7415 exec pod-subpath-test-inlinevolume-csc5 --container test-container-volume-inlinevolume-csc5 -- /bin/sh -c rm -r /test-volume/provisioning-7415'
Jul  9 10:13:27.224: INFO: stderr: ""
Jul  9 10:13:27.224: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-csc5
Jul  9 10:13:27.224: INFO: Deleting pod "pod-subpath-test-inlinevolume-csc5" in namespace "provisioning-7415"
Jul  9 10:13:27.277: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-csc5" to be fully deleted
STEP: Deleting pod
Jul  9 10:13:35.378: INFO: Deleting pod "pod-subpath-test-inlinevolume-csc5" in namespace "provisioning-7415"
Jul  9 10:13:35.479: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415" to be "Succeeded or Failed"
Jul  9 10:13:35.529: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 50.366584ms
Jul  9 10:13:37.581: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101544628s
STEP: Saw pod success
Jul  9 10:13:37.581: INFO: Pod "hostpath-symlink-prep-provisioning-7415" satisfied condition "Succeeded or Failed"
Jul  9 10:13:37.581: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415"
Jul  9 10:13:37.636: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:37.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7415" for this suite.
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:37.888: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 16 lines ...
STEP: Destroying namespace "services-3743" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:38.388: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-zr9b
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:13:15.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zr9b" in namespace "subpath-5949" to be "Succeeded or Failed"
Jul  9 10:13:15.971: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.806781ms
Jul  9 10:13:18.021: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099671613s
Jul  9 10:13:20.071: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.148949215s
Jul  9 10:13:22.128: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.20607124s
Jul  9 10:13:24.178: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.255920123s
Jul  9 10:13:26.228: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.306474125s
Jul  9 10:13:28.279: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.357130995s
Jul  9 10:13:30.329: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.40770091s
Jul  9 10:13:32.380: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 16.458222462s
Jul  9 10:13:34.431: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 18.509133192s
Jul  9 10:13:36.482: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.560037858s
Jul  9 10:13:38.533: INFO: Pod "pod-subpath-test-secret-zr9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.611061734s
STEP: Saw pod success
Jul  9 10:13:38.533: INFO: Pod "pod-subpath-test-secret-zr9b" satisfied condition "Succeeded or Failed"
Jul  9 10:13:38.582: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-secret-zr9b container test-container-subpath-secret-zr9b: <nil>
STEP: delete the pod
Jul  9 10:13:38.687: INFO: Waiting for pod pod-subpath-test-secret-zr9b to disappear
Jul  9 10:13:38.736: INFO: Pod pod-subpath-test-secret-zr9b no longer exists
STEP: Deleting pod pod-subpath-test-secret-zr9b
Jul  9 10:13:38.736: INFO: Deleting pod "pod-subpath-test-secret-zr9b" in namespace "subpath-5949"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:38.910: INFO: Only supported for providers [gce gke] (not aws)
... skipping 217 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  9 10:13:18.818: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  9 10:13:18.871: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-txxn
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:13:18.926: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-txxn" in namespace "provisioning-7151" to be "Succeeded or Failed"
Jul  9 10:13:18.977: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Pending", Reason="", readiness=false. Elapsed: 51.290952ms
Jul  9 10:13:21.029: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103001506s
Jul  9 10:13:23.081: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 4.155455713s
Jul  9 10:13:25.134: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 6.208000866s
Jul  9 10:13:27.186: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 8.260011721s
Jul  9 10:13:29.239: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 10.312864828s
Jul  9 10:13:31.292: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 12.36584494s
Jul  9 10:13:33.344: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 14.417868198s
Jul  9 10:13:35.396: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 16.469972489s
Jul  9 10:13:37.448: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 18.521924457s
Jul  9 10:13:39.500: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Running", Reason="", readiness=true. Elapsed: 20.573676925s
Jul  9 10:13:41.552: INFO: Pod "pod-subpath-test-inlinevolume-txxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.625980982s
STEP: Saw pod success
Jul  9 10:13:41.552: INFO: Pod "pod-subpath-test-inlinevolume-txxn" satisfied condition "Succeeded or Failed"
Jul  9 10:13:41.603: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-txxn container test-container-subpath-inlinevolume-txxn: <nil>
STEP: delete the pod
Jul  9 10:13:41.724: INFO: Waiting for pod pod-subpath-test-inlinevolume-txxn to disappear
Jul  9 10:13:41.775: INFO: Pod pod-subpath-test-inlinevolume-txxn no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-txxn
Jul  9 10:13:41.775: INFO: Deleting pod "pod-subpath-test-inlinevolume-txxn" in namespace "provisioning-7151"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":9,"skipped":39,"failed":0}

S
------------------------------
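Aside: the "Waiting up to 5m0s for pod ... Elapsed: ..." runs above and below are the e2e framework polling a pod's phase roughly every two seconds until it reaches "Succeeded or Failed" or the timeout expires. A minimal sketch of that poll loop (a simplified stand-in, not the real `WaitForPodCondition` from `test/e2e/framework`; `wait_for_condition` and its parameters are hypothetical names):

```python
import time


def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True, mirroring the
    'Waiting up to 5m0s ... Elapsed: Ns' lines in this log.

    Returns the elapsed seconds on success; raises TimeoutError
    once the deadline passes, just as the framework fails the
    spec after the stated timeout.
    """
    start = clock()
    while True:
        if check():
            return clock() - start  # condition satisfied
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)  # the ~2s gap between log lines above
```

Injectable `clock`/`sleep` keep the sketch testable without real waiting; the framework's version additionally logs the pod phase on every iteration, which is what produces the repeated `Phase="Pending"`/`Phase="Running"` lines.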
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:41.995: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jul  9 10:13:40.229: INFO: Waiting up to 5m0s for pod "pod-1265477d-2c5b-4a69-9783-cb5bf09a6384" in namespace "emptydir-657" to be "Succeeded or Failed"
Jul  9 10:13:40.280: INFO: Pod "pod-1265477d-2c5b-4a69-9783-cb5bf09a6384": Phase="Pending", Reason="", readiness=false. Elapsed: 51.042564ms
Jul  9 10:13:42.336: INFO: Pod "pod-1265477d-2c5b-4a69-9783-cb5bf09a6384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.107000413s
STEP: Saw pod success
Jul  9 10:13:42.336: INFO: Pod "pod-1265477d-2c5b-4a69-9783-cb5bf09a6384" satisfied condition "Succeeded or Failed"
Jul  9 10:13:42.387: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-1265477d-2c5b-4a69-9783-cb5bf09a6384 container test-container: <nil>
STEP: delete the pod
Jul  9 10:13:42.496: INFO: Waiting for pod pod-1265477d-2c5b-4a69-9783-cb5bf09a6384 to disappear
Jul  9 10:13:42.547: INFO: Pod pod-1265477d-2c5b-4a69-9783-cb5bf09a6384 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:42.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-657" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 68 lines ...
Jul  9 10:12:24.506: INFO: PersistentVolumeClaim csi-hostpath7fhbz found but phase is Pending instead of Bound.
Jul  9 10:12:26.558: INFO: PersistentVolumeClaim csi-hostpath7fhbz found but phase is Pending instead of Bound.
Jul  9 10:12:28.609: INFO: PersistentVolumeClaim csi-hostpath7fhbz found but phase is Pending instead of Bound.
Jul  9 10:12:30.661: INFO: PersistentVolumeClaim csi-hostpath7fhbz found and phase=Bound (24.676627067s)
STEP: Expanding non-expandable pvc
Jul  9 10:12:30.764: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  9 10:12:30.867: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:32.971: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:34.973: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:36.973: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:38.972: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:40.971: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:42.972: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:44.972: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:46.973: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:48.970: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:50.974: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:52.973: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:54.971: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:56.970: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:12:58.976: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:13:00.972: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  9 10:13:01.076: INFO: Error updating pvc csi-hostpath7fhbz: persistentvolumeclaims "csi-hostpath7fhbz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  9 10:13:01.076: INFO: Deleting PersistentVolumeClaim "csi-hostpath7fhbz"
Jul  9 10:13:01.128: INFO: Waiting up to 5m0s for PersistentVolume pvc-b9ed55db-2606-4c79-8ac8-e2bcbe54e0a0 to get deleted
Jul  9 10:13:01.179: INFO: PersistentVolume pvc-b9ed55db-2606-4c79-8ac8-e2bcbe54e0a0 found and phase=Released (50.989727ms)
Jul  9 10:13:06.231: INFO: PersistentVolume pvc-b9ed55db-2606-4c79-8ac8-e2bcbe54e0a0 was removed
STEP: Deleting sc
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:42.759: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":2,"skipped":18,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

S
------------------------------
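Aside: the `{"msg":"PASSED ...","total":-1,...}` lines interleaved through this log are per-spec progress records emitted by the ginkgo JSON reporter; `"total":-1` means no fixed spec total was reported, and `"failures"` (as in the record just above) lists specs that failed earlier on the same parallel worker. A small sketch for tallying them when reading a log like this one (`summarize` is a hypothetical helper, not part of the test tooling):

```python
import json


def summarize(log_lines):
    """Tally ginkgo progress records from raw log lines.

    Counts PASSED records and collects the distinct spec names
    carried in each record's 'failures' list. Non-JSON lines
    (timestamps, STEP markers, separators) are skipped.
    """
    passed = 0
    failures = set()
    for line in log_lines:
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue  # ordinary log line, not a progress record
        rec = json.loads(line)
        if rec.get("msg", "").startswith("PASSED"):
            passed += 1
        failures.update(rec.get("failures", []))
    return passed, sorted(failures)
```

Run over this section it would report the AdmissionWebhook mutating-webhook conformance spec as the one failure carried by an otherwise-passing worker.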
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:45.078: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  9 10:13:42.267: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  9 10:13:42.267: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qwcp
STEP: Creating a pod to test subpath
Jul  9 10:13:42.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qwcp" in namespace "provisioning-9923" to be "Succeeded or Failed"
Jul  9 10:13:42.376: INFO: Pod "pod-subpath-test-inlinevolume-qwcp": Phase="Pending", Reason="", readiness=false. Elapsed: 51.099918ms
Jul  9 10:13:44.429: INFO: Pod "pod-subpath-test-inlinevolume-qwcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103476421s
Jul  9 10:13:46.482: INFO: Pod "pod-subpath-test-inlinevolume-qwcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156505591s
STEP: Saw pod success
Jul  9 10:13:46.482: INFO: Pod "pod-subpath-test-inlinevolume-qwcp" satisfied condition "Succeeded or Failed"
Jul  9 10:13:46.534: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-qwcp container test-container-volume-inlinevolume-qwcp: <nil>
STEP: delete the pod
Jul  9 10:13:46.645: INFO: Waiting for pod pod-subpath-test-inlinevolume-qwcp to disappear
Jul  9 10:13:46.696: INFO: Pod pod-subpath-test-inlinevolume-qwcp no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qwcp
Jul  9 10:13:46.696: INFO: Deleting pod "pod-subpath-test-inlinevolume-qwcp" in namespace "provisioning-9923"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:46.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9923" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:42.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  9 10:13:43.209: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-148f4b35-45c7-4e8e-8b1b-5e74da4acd41" in namespace "security-context-test-5374" to be "Succeeded or Failed"
Jul  9 10:13:43.260: INFO: Pod "busybox-readonly-false-148f4b35-45c7-4e8e-8b1b-5e74da4acd41": Phase="Pending", Reason="", readiness=false. Elapsed: 51.349182ms
Jul  9 10:13:45.312: INFO: Pod "busybox-readonly-false-148f4b35-45c7-4e8e-8b1b-5e74da4acd41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103491577s
Jul  9 10:13:47.365: INFO: Pod "busybox-readonly-false-148f4b35-45c7-4e8e-8b1b-5e74da4acd41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156052318s
Jul  9 10:13:49.417: INFO: Pod "busybox-readonly-false-148f4b35-45c7-4e8e-8b1b-5e74da4acd41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208721581s
Jul  9 10:13:49.417: INFO: Pod "busybox-readonly-false-148f4b35-45c7-4e8e-8b1b-5e74da4acd41" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:49.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5374" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:46.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-74054917-aea5-495e-81aa-02aa371b1359
STEP: Creating a pod to test consume secrets
Jul  9 10:13:47.282: INFO: Waiting up to 5m0s for pod "pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9" in namespace "secrets-6938" to be "Succeeded or Failed"
Jul  9 10:13:47.333: INFO: Pod "pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9": Phase="Pending", Reason="", readiness=false. Elapsed: 51.271433ms
Jul  9 10:13:49.384: INFO: Pod "pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102724315s
STEP: Saw pod success
Jul  9 10:13:49.384: INFO: Pod "pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9" satisfied condition "Succeeded or Failed"
Jul  9 10:13:49.436: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9 container secret-volume-test: <nil>
STEP: delete the pod
Jul  9 10:13:49.554: INFO: Waiting for pod pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9 to disappear
Jul  9 10:13:49.607: INFO: Pod pod-secrets-139900f1-b846-4885-9039-da2be7c66ec9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6938" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":44,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:49.757: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 23 lines ...
Jul  9 10:12:46.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Jul  9 10:12:47.260: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  9 10:12:47.366: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1895" in namespace "volume-1895" to be "Succeeded or Failed"
Jul  9 10:12:47.417: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 51.325718ms
Jul  9 10:12:49.469: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103818586s
Jul  9 10:12:51.531: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165779041s
Jul  9 10:12:53.583: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217231593s
Jul  9 10:12:55.635: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.268986934s
STEP: Saw pod success
Jul  9 10:12:55.635: INFO: Pod "hostpath-symlink-prep-volume-1895" satisfied condition "Succeeded or Failed"
Jul  9 10:12:55.635: INFO: Deleting pod "hostpath-symlink-prep-volume-1895" in namespace "volume-1895"
Jul  9 10:12:55.694: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1895" to be fully deleted
Jul  9 10:12:55.746: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Jul  9 10:13:07.904: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-1895 exec hostpathsymlink-injector --namespace=volume-1895 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-1895' > /opt/0/index.html'
... skipping 40 lines ...
Jul  9 10:13:37.927: INFO: Pod hostpathsymlink-client still exists
Jul  9 10:13:39.872: INFO: Waiting for pod hostpathsymlink-client to disappear
Jul  9 10:13:39.924: INFO: Pod hostpathsymlink-client still exists
Jul  9 10:13:41.872: INFO: Waiting for pod hostpathsymlink-client to disappear
Jul  9 10:13:41.924: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jul  9 10:13:41.976: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1895" in namespace "volume-1895" to be "Succeeded or Failed"
Jul  9 10:13:42.027: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 51.178298ms
Jul  9 10:13:44.079: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103215167s
Jul  9 10:13:46.131: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155133219s
Jul  9 10:13:48.183: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20762661s
Jul  9 10:13:50.236: INFO: Pod "hostpath-symlink-prep-volume-1895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.25989516s
STEP: Saw pod success
Jul  9 10:13:50.236: INFO: Pod "hostpath-symlink-prep-volume-1895" satisfied condition "Succeeded or Failed"
Jul  9 10:13:50.236: INFO: Deleting pod "hostpath-symlink-prep-volume-1895" in namespace "volume-1895"
Jul  9 10:13:50.294: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1895" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:50.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1895" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":7,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:50.461: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:50.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9453" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":4,"skipped":73,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:50.656: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":109,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:37.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":8,"skipped":109,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:50.897: INFO: Only supported for providers [gce gke] (not aws)
... skipping 155 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:50.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  9 10:13:50.986: INFO: Waiting up to 5m0s for pod "downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660" in namespace "downward-api-896" to be "Succeeded or Failed"
Jul  9 10:13:51.040: INFO: Pod "downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660": Phase="Pending", Reason="", readiness=false. Elapsed: 54.239649ms
Jul  9 10:13:53.092: INFO: Pod "downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105892514s
STEP: Saw pod success
Jul  9 10:13:53.092: INFO: Pod "downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660" satisfied condition "Succeeded or Failed"
Jul  9 10:13:53.143: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660 container dapi-container: <nil>
STEP: delete the pod
Jul  9 10:13:53.258: INFO: Waiting for pod downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660 to disappear
Jul  9 10:13:53.314: INFO: Pod downward-api-03b12b6b-df77-4157-87fc-b5d4f63d0660 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:53.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-896" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:13:55.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2756" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:13:55.867: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 31 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  9 10:13:36.942: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  9 10:13:36.942: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qnmx
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:13:36.994: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qnmx" in namespace "provisioning-5604" to be "Succeeded or Failed"
Jul  9 10:13:37.043: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Pending", Reason="", readiness=false. Elapsed: 49.069752ms
Jul  9 10:13:39.093: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098851011s
Jul  9 10:13:41.143: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 4.149564203s
Jul  9 10:13:43.194: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 6.20002101s
Jul  9 10:13:45.246: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 8.252248739s
Jul  9 10:13:47.298: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 10.304097351s
Jul  9 10:13:49.348: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 12.353938576s
Jul  9 10:13:51.398: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 14.403849963s
Jul  9 10:13:53.448: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 16.454405055s
Jul  9 10:13:55.500: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 18.506429743s
Jul  9 10:13:57.551: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Running", Reason="", readiness=true. Elapsed: 20.556840304s
Jul  9 10:13:59.602: INFO: Pod "pod-subpath-test-inlinevolume-qnmx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.608220524s
STEP: Saw pod success
Jul  9 10:13:59.602: INFO: Pod "pod-subpath-test-inlinevolume-qnmx" satisfied condition "Succeeded or Failed"
Jul  9 10:13:59.652: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-qnmx container test-container-subpath-inlinevolume-qnmx: <nil>
STEP: delete the pod
Jul  9 10:13:59.758: INFO: Waiting for pod pod-subpath-test-inlinevolume-qnmx to disappear
Jul  9 10:13:59.807: INFO: Pod pod-subpath-test-inlinevolume-qnmx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qnmx
Jul  9 10:13:59.807: INFO: Deleting pod "pod-subpath-test-inlinevolume-qnmx" in namespace "provisioning-5604"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":94,"failed":0}
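The Elapsed timestamps in the subPath test above come from the e2e framework's pod-phase polling loop: roughly a 2-second poll interval under a 5-minute timeout, stopping once the pod reports "Succeeded or Failed". A minimal sketch of that wait pattern, with a hypothetical helper name and a stubbed status callable standing in for the real API-server GET:

```python
import time

def wait_for_pod_condition(get_phase, desired=("Succeeded", "Failed"),
                           timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a phase in `desired` or `timeout`
    elapses. get_phase is any zero-argument callable returning the current
    pod phase string (in the real framework this is an apiserver lookup).
    Returns the final phase, or raises TimeoutError."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in desired:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod did not reach {desired} within {timeout}s")

# Example: a fake pod that is Pending, then Running, then Succeeded.
phases = iter(["Pending", "Running", "Running", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0.01)
print(result)  # Succeeded
```

The 5m0s / 2s numbers mirror the "Waiting up to 5m0s" lines in the log; the real framework also logs each intermediate phase, as seen above.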
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:00.021: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
• [SLOW TEST:10.959 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":123,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 169 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":5,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:03.097: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:04.476: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 185 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:08.366: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 53 lines ...
Jul  9 10:14:04.123: INFO: PersistentVolumeClaim pvc-c6kzj found but phase is Pending instead of Bound.
Jul  9 10:14:06.175: INFO: PersistentVolumeClaim pvc-c6kzj found and phase=Bound (12.362989761s)
Jul  9 10:14:06.175: INFO: Waiting up to 3m0s for PersistentVolume local-j72wq to have phase Bound
Jul  9 10:14:06.226: INFO: PersistentVolume local-j72wq found and phase=Bound (51.248367ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4d78
STEP: Creating a pod to test subpath
Jul  9 10:14:06.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4d78" in namespace "provisioning-1341" to be "Succeeded or Failed"
Jul  9 10:14:06.432: INFO: Pod "pod-subpath-test-preprovisionedpv-4d78": Phase="Pending", Reason="", readiness=false. Elapsed: 51.387052ms
Jul  9 10:14:08.484: INFO: Pod "pod-subpath-test-preprovisionedpv-4d78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103502126s
Jul  9 10:14:10.537: INFO: Pod "pod-subpath-test-preprovisionedpv-4d78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156049741s
STEP: Saw pod success
Jul  9 10:14:10.537: INFO: Pod "pod-subpath-test-preprovisionedpv-4d78" satisfied condition "Succeeded or Failed"
Jul  9 10:14:10.588: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-4d78 container test-container-subpath-preprovisionedpv-4d78: <nil>
STEP: delete the pod
Jul  9 10:14:10.697: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4d78 to disappear
Jul  9 10:14:10.748: INFO: Pod pod-subpath-test-preprovisionedpv-4d78 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4d78
Jul  9 10:14:10.748: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4d78" in namespace "provisioning-1341"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":12,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:12.729: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 135 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Jul  9 10:14:04.812: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2" in namespace "security-context-test-1954" to be "Succeeded or Failed"
Jul  9 10:14:04.861: INFO: Pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 49.821777ms
Jul  9 10:14:06.913: INFO: Pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101127862s
Jul  9 10:14:08.963: INFO: Pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151921402s
Jul  9 10:14:11.014: INFO: Pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202500213s
Jul  9 10:14:13.065: INFO: Pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.252973954s
Jul  9 10:14:13.065: INFO: Pod "alpine-nnp-true-858b7953-0eaa-4373-9063-ce13a761dfd2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:13.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1954" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":29,"failed":0}
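The AllowPrivilegeEscalation case above creates a pod whose container security context explicitly sets `allowPrivilegeEscalation: true` and then waits for it to succeed. A rough sketch of such a manifest as a plain dict; the pod name, image tag, and command are illustrative, not the exact spec the suite generates:

```python
def privilege_escalation_pod(name, allow=True):
    """Build a minimal Pod manifest whose single container opts in (or out)
    of privilege escalation. Field names follow the Kubernetes Pod API;
    the real test generator adds more fields than shown here."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": name,
                "image": "alpine:3.14",  # illustrative image/tag
                "command": ["sh", "-c", "id"],
                "securityContext": {"allowPrivilegeEscalation": allow},
            }],
        },
    }

pod = privilege_escalation_pod("alpine-nnp-true-demo")
print(pod["spec"]["containers"][0]["securityContext"])
# {'allowPrivilegeEscalation': True}
```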
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:13.242: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 119 lines ...
Jul  9 10:13:42.943: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-1567c8jnd
STEP: creating a claim
Jul  9 10:13:42.994: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jul  9 10:13:43.101: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  9 10:13:43.208: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:45.313: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:47.313: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:49.313: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:51.315: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:53.314: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:55.316: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:57.313: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:13:59.315: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:01.314: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:03.314: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:05.318: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:07.313: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:09.336: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:11.311: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:13.324: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1567c8jnd",
  	... // 3 identical fields
  }

Jul  9 10:14:13.427: INFO: Error updating pvc aws7n6sj: PersistentVolumeClaim "aws7n6sj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":37,"failed":0}
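The repeated Forbidden errors above are the expected outcome of this test: the StorageClass was created without AllowVolumeExpansion, so every attempt to grow the claim from 1Gi to 2Gi is rejected with "spec is immutable after creation except resources.requests for bound claims". A simplified model of that validation rule (hypothetical helper, much reduced from the real apiserver logic, which also checks that the claim is bound):

```python
import copy

def pvc_update_allowed(old_spec, new_spec, allow_volume_expansion):
    """Return True if new_spec is an acceptable update of old_spec.

    Simplified sketch of the rule quoted in the log: the PVC spec is
    immutable after creation, except that resources.requests may change
    when the claim's StorageClass has allowVolumeExpansion enabled."""
    if old_spec == new_spec:
        return True
    if not allow_volume_expansion:
        return False
    # Ignore resources.requests; everything else must be identical.
    a, b = copy.deepcopy(old_spec), copy.deepcopy(new_spec)
    a.get("resources", {}).pop("requests", None)
    b.get("resources", {}).pop("requests", None)
    return a == b

old = {"accessModes": ["ReadWriteOnce"],
       "storageClassName": "volume-expand-1567c8jnd",
       "resources": {"requests": {"storage": "1Gi"}}}
new = copy.deepcopy(old)
new["resources"]["requests"]["storage"] = "2Gi"

print(pvc_update_allowed(old, new, allow_volume_expansion=False))  # False
print(pvc_update_allowed(old, new, allow_volume_expansion=True))   # True
```

This is why the test loops for ~30 seconds retrying the update: it is asserting that the rejection is stable, not waiting for it to succeed.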
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:13.700: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:14.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7657" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:14.621: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 201 lines ...
• [SLOW TEST:6.512 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":3,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:15.379: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
Jul  9 10:14:02.563: INFO: PersistentVolumeClaim pvc-5dc4l found but phase is Pending instead of Bound.
Jul  9 10:14:04.614: INFO: PersistentVolumeClaim pvc-5dc4l found and phase=Bound (4.154355665s)
Jul  9 10:14:04.614: INFO: Waiting up to 3m0s for PersistentVolume local-6h7tr to have phase Bound
Jul  9 10:14:04.666: INFO: PersistentVolume local-6h7tr found and phase=Bound (51.123023ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-czlt
STEP: Creating a pod to test exec-volume-test
Jul  9 10:14:04.823: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-czlt" in namespace "volume-7546" to be "Succeeded or Failed"
Jul  9 10:14:04.874: INFO: Pod "exec-volume-test-preprovisionedpv-czlt": Phase="Pending", Reason="", readiness=false. Elapsed: 50.991817ms
Jul  9 10:14:06.926: INFO: Pod "exec-volume-test-preprovisionedpv-czlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103441023s
Jul  9 10:14:08.978: INFO: Pod "exec-volume-test-preprovisionedpv-czlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155243438s
Jul  9 10:14:11.030: INFO: Pod "exec-volume-test-preprovisionedpv-czlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207607524s
Jul  9 10:14:13.082: INFO: Pod "exec-volume-test-preprovisionedpv-czlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258941864s
Jul  9 10:14:15.134: INFO: Pod "exec-volume-test-preprovisionedpv-czlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.31163203s
STEP: Saw pod success
Jul  9 10:14:15.134: INFO: Pod "exec-volume-test-preprovisionedpv-czlt" satisfied condition "Succeeded or Failed"
Jul  9 10:14:15.185: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-czlt container exec-container-preprovisionedpv-czlt: <nil>
STEP: delete the pod
Jul  9 10:14:15.314: INFO: Waiting for pod exec-volume-test-preprovisionedpv-czlt to disappear
Jul  9 10:14:15.369: INFO: Pod exec-volume-test-preprovisionedpv-czlt no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-czlt
Jul  9 10:14:15.369: INFO: Deleting pod "exec-volume-test-preprovisionedpv-czlt" in namespace "volume-7546"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":84,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:16.127: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-d95fce5e-d54a-4da5-bd90-07ee42c5c88d
STEP: Creating a pod to test consume secrets
Jul  9 10:14:15.153: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d" in namespace "projected-6093" to be "Succeeded or Failed"
Jul  9 10:14:15.203: INFO: Pod "pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.065308ms
Jul  9 10:14:17.254: INFO: Pod "pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101879206s
STEP: Saw pod success
Jul  9 10:14:17.255: INFO: Pod "pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d" satisfied condition "Succeeded or Failed"
Jul  9 10:14:17.305: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  9 10:14:17.414: INFO: Waiting for pod pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d to disappear
Jul  9 10:14:17.463: INFO: Pod pod-projected-secrets-b0351cbf-a89c-4fe1-bf2d-ebf40225953d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:17.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6093" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":85,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:17.589: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 93 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-5c26940d-6652-4655-b5eb-9a618e0b9d0a
STEP: Creating a pod to test consume configMaps
Jul  9 10:14:15.325: INFO: Waiting up to 5m0s for pod "pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4" in namespace "configmap-850" to be "Succeeded or Failed"
Jul  9 10:14:15.375: INFO: Pod "pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.314011ms
Jul  9 10:14:17.426: INFO: Pod "pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101506758s
STEP: Saw pod success
Jul  9 10:14:17.426: INFO: Pod "pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4" satisfied condition "Succeeded or Failed"
Jul  9 10:14:17.476: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4 container agnhost-container: <nil>
STEP: delete the pod
Jul  9 10:14:17.590: INFO: Waiting for pod pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4 to disappear
Jul  9 10:14:17.642: INFO: Pod pod-configmaps-63ee7ce1-e131-4fcd-ae04-a3b904fb01a4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:17.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-850" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:17.778: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:14:17.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e" in namespace "downward-api-2740" to be "Succeeded or Failed"
Jul  9 10:14:18.006: INFO: Pod "downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.762467ms
Jul  9 10:14:20.057: INFO: Pod "downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10070651s
STEP: Saw pod success
Jul  9 10:14:20.057: INFO: Pod "downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e" satisfied condition "Succeeded or Failed"
Jul  9 10:14:20.107: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e container client-container: <nil>
STEP: delete the pod
Jul  9 10:14:20.214: INFO: Waiting for pod downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e to disappear
Jul  9 10:14:20.264: INFO: Pod downwardapi-volume-7d8a3039-2396-4840-9575-8ba54274014e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:20.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2740" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":103,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":13,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:24.393: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:24.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-96" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":8,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:24.890: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Jul  9 10:14:06.780: INFO: PersistentVolumeClaim pvc-bktkg found and phase=Bound (49.757412ms)
Jul  9 10:14:06.780: INFO: Waiting up to 3m0s for PersistentVolume nfs-jzqcz to have phase Bound
Jul  9 10:14:06.830: INFO: PersistentVolume nfs-jzqcz found and phase=Bound (49.799036ms)
STEP: Checking pod has write access to PersistentVolume
Jul  9 10:14:06.929: INFO: Creating nfs test pod
Jul  9 10:14:06.985: INFO: Pod should terminate with exitcode 0 (success)
Jul  9 10:14:06.985: INFO: Waiting up to 5m0s for pod "pvc-tester-xxr57" in namespace "pv-9380" to be "Succeeded or Failed"
Jul  9 10:14:07.035: INFO: Pod "pvc-tester-xxr57": Phase="Pending", Reason="", readiness=false. Elapsed: 49.625039ms
Jul  9 10:14:09.087: INFO: Pod "pvc-tester-xxr57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101483716s
STEP: Saw pod success
Jul  9 10:14:09.087: INFO: Pod "pvc-tester-xxr57" satisfied condition "Succeeded or Failed"
Jul  9 10:14:09.087: INFO: Pod pvc-tester-xxr57 succeeded 
Jul  9 10:14:09.087: INFO: Deleting pod "pvc-tester-xxr57" in namespace "pv-9380"
Jul  9 10:14:09.144: INFO: Wait up to 5m0s for pod "pvc-tester-xxr57" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul  9 10:14:09.197: INFO: Deleting PVC pvc-bktkg to trigger reclamation of PV 
Jul  9 10:14:09.197: INFO: Deleting PersistentVolumeClaim "pvc-bktkg"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:25.836: INFO: Only supported for providers [azure] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  9 10:14:25.206: INFO: Waiting up to 5m0s for pod "pod-392e5cbd-f199-4200-9134-4bb17d920b07" in namespace "emptydir-8056" to be "Succeeded or Failed"
Jul  9 10:14:25.256: INFO: Pod "pod-392e5cbd-f199-4200-9134-4bb17d920b07": Phase="Pending", Reason="", readiness=false. Elapsed: 49.908004ms
Jul  9 10:14:27.307: INFO: Pod "pod-392e5cbd-f199-4200-9134-4bb17d920b07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100682996s
STEP: Saw pod success
Jul  9 10:14:27.307: INFO: Pod "pod-392e5cbd-f199-4200-9134-4bb17d920b07" satisfied condition "Succeeded or Failed"
Jul  9 10:14:27.357: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-392e5cbd-f199-4200-9134-4bb17d920b07 container test-container: <nil>
STEP: delete the pod
Jul  9 10:14:27.462: INFO: Waiting for pod pod-392e5cbd-f199-4200-9134-4bb17d920b07 to disappear
Jul  9 10:14:27.512: INFO: Pod pod-392e5cbd-f199-4200-9134-4bb17d920b07 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 12 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:30.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-2108" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:30.544: INFO: Only supported for providers [gce gke] (not aws)
... skipping 206 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":29,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jul  9 10:14:04.104: INFO: PersistentVolumeClaim pvc-fxfkp found but phase is Pending instead of Bound.
Jul  9 10:14:06.154: INFO: PersistentVolumeClaim pvc-fxfkp found and phase=Bound (2.09932971s)
Jul  9 10:14:06.154: INFO: Waiting up to 3m0s for PersistentVolume local-z5l7d to have phase Bound
Jul  9 10:14:06.203: INFO: PersistentVolume local-z5l7d found and phase=Bound (48.743087ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nt4n
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:14:06.352: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nt4n" in namespace "provisioning-800" to be "Succeeded or Failed"
Jul  9 10:14:06.405: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Pending", Reason="", readiness=false. Elapsed: 53.319436ms
Jul  9 10:14:08.455: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10331507s
Jul  9 10:14:10.508: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 4.155884563s
Jul  9 10:14:12.562: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 6.210167586s
Jul  9 10:14:14.613: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 8.260973133s
Jul  9 10:14:16.663: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 10.310992156s
Jul  9 10:14:18.713: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 12.361024741s
Jul  9 10:14:20.763: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 14.41173055s
Jul  9 10:14:22.814: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 16.462221104s
Jul  9 10:14:24.863: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 18.511836264s
Jul  9 10:14:26.913: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Running", Reason="", readiness=true. Elapsed: 20.561466961s
Jul  9 10:14:28.963: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.611109478s
STEP: Saw pod success
Jul  9 10:14:28.963: INFO: Pod "pod-subpath-test-preprovisionedpv-nt4n" satisfied condition "Succeeded or Failed"
Jul  9 10:14:29.012: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-nt4n container test-container-subpath-preprovisionedpv-nt4n: <nil>
STEP: delete the pod
Jul  9 10:14:29.120: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nt4n to disappear
Jul  9 10:14:29.171: INFO: Pod pod-subpath-test-preprovisionedpv-nt4n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nt4n
Jul  9 10:14:29.172: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nt4n" in namespace "provisioning-800"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:31.116: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:13:20.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
Jul  9 10:13:22.634: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1438 to register on node ip-172-20-54-0.us-west-1.compute.internal
STEP: Creating pod
Jul  9 10:13:32.386: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  9 10:13:32.438: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-p5f8v] to have phase Bound
Jul  9 10:13:32.487: INFO: PersistentVolumeClaim pvc-p5f8v found and phase=Bound (49.548096ms)
STEP: checking for CSIInlineVolumes feature
Jul  9 10:13:52.841: INFO: Error getting logs for pod inline-volume-xd97g: the server rejected our request for an unknown reason (get pods inline-volume-xd97g)
Jul  9 10:13:52.940: INFO: Deleting pod "inline-volume-xd97g" in namespace "csi-mock-volumes-1438"
Jul  9 10:13:52.992: INFO: Wait up to 5m0s for pod "inline-volume-xd97g" to be fully deleted
STEP: Deleting the previously created pod
Jul  9 10:14:05.093: INFO: Deleting pod "pvc-volume-tester-qdwvj" in namespace "csi-mock-volumes-1438"
Jul  9 10:14:05.148: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qdwvj" to be fully deleted
STEP: Checking CSI driver logs
Jul  9 10:14:11.300: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-qdwvj
Jul  9 10:14:11.300: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1438
Jul  9 10:14:11.300: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9bc38563-fd35-4929-a1a7-4c8c2968b92e
Jul  9 10:14:11.300: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jul  9 10:14:11.300: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jul  9 10:14:11.300: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9bc38563-fd35-4929-a1a7-4c8c2968b92e/volumes/kubernetes.io~csi/pvc-7a64e6b2-1aa0-45b4-9b93-eb0319300e8d/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-qdwvj
Jul  9 10:14:11.300: INFO: Deleting pod "pvc-volume-tester-qdwvj" in namespace "csi-mock-volumes-1438"
STEP: Deleting claim pvc-p5f8v
Jul  9 10:14:11.450: INFO: Waiting up to 2m0s for PersistentVolume pvc-7a64e6b2-1aa0-45b4-9b93-eb0319300e8d to get deleted
Jul  9 10:14:11.502: INFO: PersistentVolume pvc-7a64e6b2-1aa0-45b4-9b93-eb0319300e8d was removed
STEP: Deleting storageclass csi-mock-volumes-1438-sc5kpdc
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":4,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:31.535: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":9,"skipped":109,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:27.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:14:27.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3" in namespace "projected-7081" to be "Succeeded or Failed"
Jul  9 10:14:27.976: INFO: Pod "downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 49.723801ms
Jul  9 10:14:30.027: INFO: Pod "downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100650376s
Jul  9 10:14:32.078: INFO: Pod "downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151889266s
STEP: Saw pod success
Jul  9 10:14:32.079: INFO: Pod "downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3" satisfied condition "Succeeded or Failed"
Jul  9 10:14:32.129: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3 container client-container: <nil>
STEP: delete the pod
Jul  9 10:14:32.235: INFO: Waiting for pod downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3 to disappear
Jul  9 10:14:32.284: INFO: Pod downwardapi-volume-ff8f3147-d4b7-451d-af7e-0cb90259c6a3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:32.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7081" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":109,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:32.419: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":10,"skipped":124,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:33.058: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:33.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1219" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":9,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:33.860: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":69,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:32.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:40.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8733" for this suite.


• [SLOW TEST:8.453 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":11,"skipped":124,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:40.941: INFO: Only supported for providers [azure] (not aws)
... skipping 80 lines ...
Jul  9 10:14:34.035: INFO: PersistentVolumeClaim pvc-j62nk found but phase is Pending instead of Bound.
Jul  9 10:14:36.087: INFO: PersistentVolumeClaim pvc-j62nk found and phase=Bound (16.469805061s)
Jul  9 10:14:36.087: INFO: Waiting up to 3m0s for PersistentVolume local-ffkhk to have phase Bound
Jul  9 10:14:36.138: INFO: PersistentVolume local-ffkhk found and phase=Bound (50.980841ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7g6k
STEP: Creating a pod to test subpath
Jul  9 10:14:36.292: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7g6k" in namespace "provisioning-4143" to be "Succeeded or Failed"
Jul  9 10:14:36.343: INFO: Pod "pod-subpath-test-preprovisionedpv-7g6k": Phase="Pending", Reason="", readiness=false. Elapsed: 50.951184ms
Jul  9 10:14:38.395: INFO: Pod "pod-subpath-test-preprovisionedpv-7g6k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102278111s
Jul  9 10:14:40.446: INFO: Pod "pod-subpath-test-preprovisionedpv-7g6k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154045459s
STEP: Saw pod success
Jul  9 10:14:40.446: INFO: Pod "pod-subpath-test-preprovisionedpv-7g6k" satisfied condition "Succeeded or Failed"
Jul  9 10:14:40.498: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7g6k container test-container-subpath-preprovisionedpv-7g6k: <nil>
STEP: delete the pod
Jul  9 10:14:40.618: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7g6k to disappear
Jul  9 10:14:40.669: INFO: Pod pod-subpath-test-preprovisionedpv-7g6k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7g6k
Jul  9 10:14:40.669: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7g6k" in namespace "provisioning-4143"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":114,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:40.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  9 10:14:41.275: INFO: Waiting up to 5m0s for pod "pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf" in namespace "emptydir-2755" to be "Succeeded or Failed"
Jul  9 10:14:41.325: INFO: Pod "pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 50.031213ms
Jul  9 10:14:43.376: INFO: Pod "pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100611558s
STEP: Saw pod success
Jul  9 10:14:43.376: INFO: Pod "pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf" satisfied condition "Succeeded or Failed"
Jul  9 10:14:43.426: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf container test-container: <nil>
STEP: delete the pod
Jul  9 10:14:43.538: INFO: Waiting for pod pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf to disappear
Jul  9 10:14:43.588: INFO: Pod pod-28ba06fb-92a4-4ae2-9f1a-4f0b76fda6bf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 80 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:43.752: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:44.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7007" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":8,"skipped":118,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Jul  9 10:13:07.869: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7117
Jul  9 10:13:07.919: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7117
Jul  9 10:13:07.970: INFO: creating *v1.StatefulSet: csi-mock-volumes-7117-2020/csi-mockplugin
Jul  9 10:13:08.024: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7117
Jul  9 10:13:08.077: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7117"
Jul  9 10:13:08.127: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7117 to register on node ip-172-20-54-0.us-west-1.compute.internal
I0709 10:13:15.287729   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0709 10:13:15.338044   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7117","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0709 10:13:15.388754   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0709 10:13:15.439466   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0709 10:13:15.552329   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7117","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0709 10:13:16.498561   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7117","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Jul  9 10:13:24.737: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0709 10:13:24.853401   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0709 10:13:27.508406   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I0709 10:13:30.106846   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0709 10:13:30.157187   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  9 10:13:30.209: INFO: >>> kubeConfig: /root/.kube/config
I0709 10:13:30.591916   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7","storage.kubernetes.io/csiProvisionerIdentity":"1625825595483-8081-csi-mock-csi-mock-volumes-7117"}},"Response":{},"Error":"","FullError":null}
I0709 10:13:31.157463   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0709 10:13:31.209349   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  9 10:13:31.260: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:13:31.639: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:13:32.034: INFO: >>> kubeConfig: /root/.kube/config
I0709 10:13:32.424064   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7/globalmount","target_path":"/var/lib/kubelet/pods/66b663b4-5942-4d3b-b47f-d875669e102c/volumes/kubernetes.io~csi/pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7","storage.kubernetes.io/csiProvisionerIdentity":"1625825595483-8081-csi-mock-csi-mock-volumes-7117"}},"Response":{},"Error":"","FullError":null}
Jul  9 10:13:38.944: INFO: Deleting pod "pvc-volume-tester-28crq" in namespace "csi-mock-volumes-7117"
Jul  9 10:13:38.997: INFO: Wait up to 5m0s for pod "pvc-volume-tester-28crq" to be fully deleted
Jul  9 10:13:40.685: INFO: >>> kubeConfig: /root/.kube/config
I0709 10:13:41.092794   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/66b663b4-5942-4d3b-b47f-d875669e102c/volumes/kubernetes.io~csi/pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7/mount"},"Response":{},"Error":"","FullError":null}
I0709 10:13:41.194756   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0709 10:13:41.245903   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-42a376b2-9799-4e0a-b441-4a7e77a64dc7/globalmount"},"Response":{},"Error":"","FullError":null}
I0709 10:13:45.177760   12299 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul  9 10:13:46.151: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kgplt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7117", SelfLink:"", UID:"42a376b2-9799-4e0a-b441-4a7e77a64dc7", ResourceVersion:"5408", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761422404, loc:(*time.Location)(0xa085940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034fa3d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034fa3f0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0034f8580), VolumeMode:(*v1.PersistentVolumeMode)(0xc0034f8590), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  9 10:13:46.152: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kgplt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7117", SelfLink:"", UID:"42a376b2-9799-4e0a-b441-4a7e77a64dc7", ResourceVersion:"5412", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761422404, loc:(*time.Location)(0xa085940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-54-0.us-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afe2b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afe2d0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afe2e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afe300), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00260e340), VolumeMode:(*v1.PersistentVolumeMode)(0xc00260e350), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  9 10:13:46.152: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kgplt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7117", SelfLink:"", UID:"42a376b2-9799-4e0a-b441-4a7e77a64dc7", ResourceVersion:"5413", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761422404, loc:(*time.Location)(0xa085940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7117", "volume.kubernetes.io/selected-node":"ip-172-20-54-0.us-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043202a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043202b8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043202d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043202e8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004320300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004320318), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003ca0670), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ca0680), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  9 10:13:46.152: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kgplt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7117", SelfLink:"", UID:"42a376b2-9799-4e0a-b441-4a7e77a64dc7", ResourceVersion:"5416", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761422404, loc:(*time.Location)(0xa085940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7117"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004320330), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004320348), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004320360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004320378), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004320390), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043203a8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003ca06b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ca06d0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  9 10:13:46.152: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kgplt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7117", SelfLink:"", UID:"42a376b2-9799-4e0a-b441-4a7e77a64dc7", ResourceVersion:"5483", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761422404, loc:(*time.Location)(0xa085940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7117", "volume.kubernetes.io/selected-node":"ip-172-20-54-0.us-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000626c18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000626c30), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000626c48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000626c60), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000626c78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000626c90), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0033c9fd0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0033c9fe0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":2,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:45.580: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:46.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2492" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:46.176: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-de490bb8-369d-4cf2-b991-3cfff1b09ec8
STEP: Creating a pod to test consume configMaps
Jul  9 10:14:44.174: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b" in namespace "projected-1896" to be "Succeeded or Failed"
Jul  9 10:14:44.226: INFO: Pod "pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.09872ms
Jul  9 10:14:46.278: INFO: Pod "pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103000801s
STEP: Saw pod success
Jul  9 10:14:46.278: INFO: Pod "pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b" satisfied condition "Succeeded or Failed"
Jul  9 10:14:46.329: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b container agnhost-container: <nil>
STEP: delete the pod
Jul  9 10:14:46.439: INFO: Waiting for pod pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b to disappear
Jul  9 10:14:46.491: INFO: Pod pod-projected-configmaps-03b92bcb-be81-4ec8-9323-cac7ef74fd0b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:46.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1896" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:46.609: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 8 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
Jul  9 10:14:46.875: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  9 10:14:46.875: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-22w2
STEP: Creating a pod to test subpath
Jul  9 10:14:46.934: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-22w2" in namespace "provisioning-4041" to be "Succeeded or Failed"
Jul  9 10:14:46.985: INFO: Pod "pod-subpath-test-inlinevolume-22w2": Phase="Pending", Reason="", readiness=false. Elapsed: 51.360453ms
Jul  9 10:14:49.039: INFO: Pod "pod-subpath-test-inlinevolume-22w2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105111574s
Jul  9 10:14:51.092: INFO: Pod "pod-subpath-test-inlinevolume-22w2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157891326s
Jul  9 10:14:53.144: INFO: Pod "pod-subpath-test-inlinevolume-22w2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210502452s
STEP: Saw pod success
Jul  9 10:14:53.144: INFO: Pod "pod-subpath-test-inlinevolume-22w2" satisfied condition "Succeeded or Failed"
Jul  9 10:14:53.196: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-22w2 container test-container-subpath-inlinevolume-22w2: <nil>
STEP: delete the pod
Jul  9 10:14:53.303: INFO: Waiting for pod pod-subpath-test-inlinevolume-22w2 to disappear
Jul  9 10:14:53.354: INFO: Pod pod-subpath-test-inlinevolume-22w2 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-22w2
Jul  9 10:14:53.355: INFO: Deleting pod "pod-subpath-test-inlinevolume-22w2" in namespace "provisioning-4041"
... skipping 30 lines ...
Jul  9 10:13:45.362: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3678mbnd
STEP: creating a claim
Jul  9 10:13:45.414: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-2phl
STEP: Creating a pod to test subpath
Jul  9 10:13:45.575: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2phl" in namespace "provisioning-367" to be "Succeeded or Failed"
Jul  9 10:13:45.627: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 51.97447ms
Jul  9 10:13:47.680: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105401329s
Jul  9 10:13:49.733: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158088691s
Jul  9 10:13:51.785: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21062415s
Jul  9 10:13:53.842: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26779192s
Jul  9 10:13:55.912: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.337396878s
... skipping 16 lines ...
Jul  9 10:14:30.818: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 45.243349378s
Jul  9 10:14:32.870: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 47.295784039s
Jul  9 10:14:34.924: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 49.349227879s
Jul  9 10:14:36.977: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Pending", Reason="", readiness=false. Elapsed: 51.402579205s
Jul  9 10:14:39.031: INFO: Pod "pod-subpath-test-dynamicpv-2phl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 53.456313942s
STEP: Saw pod success
Jul  9 10:14:39.031: INFO: Pod "pod-subpath-test-dynamicpv-2phl" satisfied condition "Succeeded or Failed"
Jul  9 10:14:39.083: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-2phl container test-container-volume-dynamicpv-2phl: <nil>
STEP: delete the pod
Jul  9 10:14:39.199: INFO: Waiting for pod pod-subpath-test-dynamicpv-2phl to disappear
Jul  9 10:14:39.251: INFO: Pod pod-subpath-test-dynamicpv-2phl no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2phl
Jul  9 10:14:39.251: INFO: Deleting pod "pod-subpath-test-dynamicpv-2phl" in namespace "provisioning-367"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":22,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:14:54.897: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":128,"failed":0}
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:43.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 138 lines ...
Jul  9 10:14:48.421: INFO: PersistentVolumeClaim pvc-nttz8 found but phase is Pending instead of Bound.
Jul  9 10:14:50.473: INFO: PersistentVolumeClaim pvc-nttz8 found and phase=Bound (10.315329557s)
Jul  9 10:14:50.473: INFO: Waiting up to 3m0s for PersistentVolume local-2666b to have phase Bound
Jul  9 10:14:50.524: INFO: PersistentVolume local-2666b found and phase=Bound (51.49301ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c776
STEP: Creating a pod to test subpath
Jul  9 10:14:50.680: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c776" in namespace "provisioning-9066" to be "Succeeded or Failed"
Jul  9 10:14:50.732: INFO: Pod "pod-subpath-test-preprovisionedpv-c776": Phase="Pending", Reason="", readiness=false. Elapsed: 52.382061ms
Jul  9 10:14:52.787: INFO: Pod "pod-subpath-test-preprovisionedpv-c776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106976678s
Jul  9 10:14:54.838: INFO: Pod "pod-subpath-test-preprovisionedpv-c776": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158594393s
STEP: Saw pod success
Jul  9 10:14:54.838: INFO: Pod "pod-subpath-test-preprovisionedpv-c776" satisfied condition "Succeeded or Failed"
Jul  9 10:14:54.889: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-c776 container test-container-volume-preprovisionedpv-c776: <nil>
STEP: delete the pod
Jul  9 10:14:55.002: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c776 to disappear
Jul  9 10:14:55.054: INFO: Pod pod-subpath-test-preprovisionedpv-c776 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c776
Jul  9 10:14:55.054: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c776" in namespace "provisioning-9066"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":73,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "node-problem-detector-2020" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.363 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 31 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:14:57.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3054" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":11,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:15:00.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6469" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":12,"skipped":84,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:15:00.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  9 10:15:00.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac" in namespace "downward-api-5296" to be "Succeeded or Failed"
Jul  9 10:15:00.661: INFO: Pod "downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac": Phase="Pending", Reason="", readiness=false. Elapsed: 53.043078ms
Jul  9 10:15:02.713: INFO: Pod "downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104647807s
STEP: Saw pod success
Jul  9 10:15:02.713: INFO: Pod "downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac" satisfied condition "Succeeded or Failed"
Jul  9 10:15:02.765: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac container client-container: <nil>
STEP: delete the pod
Jul  9 10:15:02.878: INFO: Waiting for pod downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac to disappear
Jul  9 10:15:02.929: INFO: Pod downwardapi-volume-53995296-8e80-48b6-8166-ca0f105ea4ac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:15:02.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5296" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":84,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-3888" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":14,"skipped":88,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":121,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:14.597: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
Jul  9 10:15:14.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Jul  9 10:15:14.952: INFO: Waiting up to 5m0s for pod "var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f" in namespace "var-expansion-6187" to be "Succeeded or Failed"
Jul  9 10:15:15.004: INFO: Pod "var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.218333ms
Jul  9 10:15:17.056: INFO: Pod "var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10399223s
STEP: Saw pod success
Jul  9 10:15:17.056: INFO: Pod "var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f" satisfied condition "Succeeded or Failed"
Jul  9 10:15:17.107: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f container dapi-container: <nil>
STEP: delete the pod
Jul  9 10:15:17.222: INFO: Waiting for pod var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f to disappear
Jul  9 10:15:17.273: INFO: Pod var-expansion-ddac69e6-d772-4689-b8b9-215b3112637f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":11,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:41.896: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Jul  9 10:14:42.150: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-7451r7sc4
STEP: creating a claim
Jul  9 10:14:42.201: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-kdg6
STEP: Creating a pod to test exec-volume-test
Jul  9 10:14:42.357: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-kdg6" in namespace "volume-7451" to be "Succeeded or Failed"
Jul  9 10:14:42.407: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.239014ms
Jul  9 10:14:44.457: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100684236s
Jul  9 10:14:46.508: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151247656s
Jul  9 10:14:48.561: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203965664s
Jul  9 10:14:50.612: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.25556006s
Jul  9 10:14:52.663: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306626789s
Jul  9 10:14:54.714: INFO: Pod "exec-volume-test-dynamicpv-kdg6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.357224992s
STEP: Saw pod success
Jul  9 10:14:54.714: INFO: Pod "exec-volume-test-dynamicpv-kdg6" satisfied condition "Succeeded or Failed"
Jul  9 10:14:54.764: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod exec-volume-test-dynamicpv-kdg6 container exec-container-dynamicpv-kdg6: <nil>
STEP: delete the pod
Jul  9 10:14:54.873: INFO: Waiting for pod exec-volume-test-dynamicpv-kdg6 to disappear
Jul  9 10:14:54.923: INFO: Pod exec-volume-test-dynamicpv-kdg6 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-kdg6
Jul  9 10:14:54.923: INFO: Deleting pod "exec-volume-test-dynamicpv-kdg6" in namespace "volume-7451"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":127,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:30.726: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 107 lines ...
• [SLOW TEST:5.259 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":138,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:36.030: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Jul  9 10:15:27.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul  9 10:15:27.324: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  9 10:15:27.432: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3912" in namespace "provisioning-3912" to be "Succeeded or Failed"
Jul  9 10:15:27.484: INFO: Pod "hostpath-symlink-prep-provisioning-3912": Phase="Pending", Reason="", readiness=false. Elapsed: 52.369605ms
Jul  9 10:15:29.537: INFO: Pod "hostpath-symlink-prep-provisioning-3912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105471301s
STEP: Saw pod success
Jul  9 10:15:29.538: INFO: Pod "hostpath-symlink-prep-provisioning-3912" satisfied condition "Succeeded or Failed"
Jul  9 10:15:29.538: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3912" in namespace "provisioning-3912"
Jul  9 10:15:29.595: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3912" to be fully deleted
Jul  9 10:15:29.647: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4w4g
STEP: Creating a pod to test subpath
Jul  9 10:15:29.700: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4w4g" in namespace "provisioning-3912" to be "Succeeded or Failed"
Jul  9 10:15:29.752: INFO: Pod "pod-subpath-test-inlinevolume-4w4g": Phase="Pending", Reason="", readiness=false. Elapsed: 52.396211ms
Jul  9 10:15:31.805: INFO: Pod "pod-subpath-test-inlinevolume-4w4g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105004915s
Jul  9 10:15:33.858: INFO: Pod "pod-subpath-test-inlinevolume-4w4g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158065708s
STEP: Saw pod success
Jul  9 10:15:33.858: INFO: Pod "pod-subpath-test-inlinevolume-4w4g" satisfied condition "Succeeded or Failed"
Jul  9 10:15:33.910: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-4w4g container test-container-subpath-inlinevolume-4w4g: <nil>
STEP: delete the pod
Jul  9 10:15:34.019: INFO: Waiting for pod pod-subpath-test-inlinevolume-4w4g to disappear
Jul  9 10:15:34.070: INFO: Pod pod-subpath-test-inlinevolume-4w4g no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4w4g
Jul  9 10:15:34.070: INFO: Deleting pod "pod-subpath-test-inlinevolume-4w4g" in namespace "provisioning-3912"
STEP: Deleting pod
Jul  9 10:15:34.122: INFO: Deleting pod "pod-subpath-test-inlinevolume-4w4g" in namespace "provisioning-3912"
Jul  9 10:15:34.244: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3912" in namespace "provisioning-3912" to be "Succeeded or Failed"
Jul  9 10:15:34.296: INFO: Pod "hostpath-symlink-prep-provisioning-3912": Phase="Pending", Reason="", readiness=false. Elapsed: 52.055668ms
Jul  9 10:15:36.350: INFO: Pod "hostpath-symlink-prep-provisioning-3912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105946124s
STEP: Saw pod success
Jul  9 10:15:36.350: INFO: Pod "hostpath-symlink-prep-provisioning-3912" satisfied condition "Succeeded or Failed"
Jul  9 10:15:36.350: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3912" in namespace "provisioning-3912"
Jul  9 10:15:36.406: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3912" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:15:36.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3912" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":61,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Jul  9 10:14:30.925: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2548hnx4v
STEP: creating a claim
Jul  9 10:14:30.976: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-7fd4
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:14:31.140: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7fd4" in namespace "provisioning-2548" to be "Succeeded or Failed"
Jul  9 10:14:31.199: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 59.104962ms
Jul  9 10:14:33.250: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109986205s
Jul  9 10:14:35.300: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160260552s
Jul  9 10:14:37.351: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210920634s
Jul  9 10:14:39.401: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261039749s
Jul  9 10:14:41.452: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.312267296s
... skipping 7 lines ...
Jul  9 10:14:57.862: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Running", Reason="", readiness=true. Elapsed: 26.722516767s
Jul  9 10:14:59.921: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Running", Reason="", readiness=true. Elapsed: 28.780851846s
Jul  9 10:15:01.972: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Running", Reason="", readiness=true. Elapsed: 30.831909277s
Jul  9 10:15:04.024: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Running", Reason="", readiness=true. Elapsed: 32.884097354s
Jul  9 10:15:06.075: INFO: Pod "pod-subpath-test-dynamicpv-7fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.935443739s
STEP: Saw pod success
Jul  9 10:15:06.075: INFO: Pod "pod-subpath-test-dynamicpv-7fd4" satisfied condition "Succeeded or Failed"
Jul  9 10:15:06.126: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-7fd4 container test-container-subpath-dynamicpv-7fd4: <nil>
STEP: delete the pod
Jul  9 10:15:06.236: INFO: Waiting for pod pod-subpath-test-dynamicpv-7fd4 to disappear
Jul  9 10:15:06.285: INFO: Pod pod-subpath-test-dynamicpv-7fd4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7fd4
Jul  9 10:15:06.286: INFO: Deleting pod "pod-subpath-test-dynamicpv-7fd4" in namespace "provisioning-2548"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:37.062: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Jul  9 10:15:37.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  9 10:15:37.381: INFO: Waiting up to 5m0s for pod "security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4" in namespace "security-context-7221" to be "Succeeded or Failed"
Jul  9 10:15:37.431: INFO: Pod "security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 49.756513ms
Jul  9 10:15:39.482: INFO: Pod "security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100623581s
STEP: Saw pod success
Jul  9 10:15:39.482: INFO: Pod "security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4" satisfied condition "Succeeded or Failed"
Jul  9 10:15:39.532: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4 container test-container: <nil>
STEP: delete the pod
Jul  9 10:15:39.639: INFO: Waiting for pod security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4 to disappear
Jul  9 10:15:39.689: INFO: Pod security-context-bc602ccf-23aa-4c0f-b77a-b029cfe88dd4 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:15:39.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7221" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:15:42.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3908" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:42.575: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 907 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":38,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:15:46.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:15:47.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9358" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:47.865: INFO: Driver aws doesn't publish storage capacity -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 137 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:53.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
STEP: Registering slow webhook via the AdmissionRegistration API
Jul  9 10:15:08.622: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:15:18.829: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:15:29.028: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:15:39.234: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:15:49.340: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:15:49.340: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 472 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:15:49.340: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":3,"skipped":27,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:54.294: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  9 10:14:19.737: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  9 10:14:19.789: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:52.036: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-1426-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-4266.svc:9443/crdconvert?timeout=30s": dial tcp 100.70.8.195:9443: i/o timeout
Jul  9 10:15:22.193: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-1426-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-4266.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jul  9 10:15:52.246: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-1426-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-4266.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jul  9 10:15:52.247: FAIL: Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 251 lines ...
• Failure [101.304 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:15:52.247: Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":13,"skipped":128,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:14:55.221: INFO: >>> kubeConfig: /root/.kube/config
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":14,"skipped":128,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:56.774: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:15:17.389: INFO: >>> kubeConfig: /root/.kube/config
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":130,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:15:58.104: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":15,"skipped":90,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:15:35.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 35 lines ...
Jul  9 10:13:20.986: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.26:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:13:20.986: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:13:21.401: INFO: Found all 1 expected endpoints: [netserver-0]
Jul  9 10:13:21.401: INFO: Going to poll 100.96.2.29 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jul  9 10:13:21.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:13:21.450: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:13:22.832: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:13:22.832: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:13:24.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:13:24.881: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:13:26.285: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:13:26.285: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
... skipping 80 lines ...
Jul  9 10:14:37.157: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:37.157: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:38.550: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:38.550: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:14:40.600: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:40.600: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:41.984: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:41.984: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:14:44.036: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:44.036: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:45.412: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:45.412: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:14:47.462: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:47.462: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:48.859: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:48.859: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:14:50.909: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:50.909: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:52.322: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:52.322: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:14:54.372: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:54.372: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:55.791: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:55.791: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:14:57.841: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:14:57.841: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:14:59.217: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:14:59.217: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:01.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:01.277: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:02.664: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:02.664: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:04.714: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:04.714: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:06.081: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:06.081: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:08.131: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:08.131: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:09.511: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:09.511: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:11.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:11.564: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:12.951: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:12.951: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:15.003: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:15.003: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:16.385: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:16.385: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:18.437: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:18.438: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:19.806: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:19.806: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:21.857: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:21.857: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:23.237: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:23.237: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:25.288: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:25.289: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:26.703: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:26.703: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:28.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:28.754: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:30.131: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:30.131: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:32.181: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:32.181: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:33.558: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:33.558: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:35.609: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:35.609: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:37.013: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:37.013: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:39.064: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:39.064: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:40.440: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:40.440: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:42.490: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:42.490: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:43.885: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:43.885: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:45.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:45.936: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:47.331: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:47.331: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:49.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:49.382: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:50.812: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:50.812: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:52.877: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:52.877: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:54.287: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:54.287: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:56.341: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6301 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  9 10:15:56.341: INFO: >>> kubeConfig: /root/.kube/config
Jul  9 10:15:57.725: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  9 10:15:57.725: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Jul  9 10:15:59.726: INFO: 
Output of kubectl describe pod pod-network-test-6301/netserver-0:

Jul  9 10:15:59.726: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=pod-network-test-6301 describe pod netserver-0 --namespace=pod-network-test-6301'
Jul  9 10:16:00.068: INFO: stderr: ""
... skipping 237 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m7s  default-scheduler  Successfully assigned pod-network-test-6301/netserver-3 to ip-172-20-55-238.us-west-1.compute.internal
  Normal  Pulled     3m6s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    3m6s  kubelet            Created container webserver
  Normal  Started    3m6s  kubelet            Started container webserver

Jul  9 10:16:01.070: FAIL: Error dialing HTTP node to pod failed to find expected endpoints, 
tries 46
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName
retrieved map[]
expected map[netserver-1:{}]

Full Stack Trace
... skipping 290 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  9 10:16:01.070: Error dialing HTTP node to pod failed to find expected endpoints, 
    tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.2.29:8083/hostName
    retrieved map[]
    expected map[netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:03.725: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":4,"skipped":49,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:15:55.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:11.729 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":5,"skipped":49,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:03.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-ac09ab11-fc42-4d41-abc5-c203d5065c88
STEP: Creating a pod to test consume secrets
Jul  9 10:16:04.358: INFO: Waiting up to 5m0s for pod "pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638" in namespace "secrets-6026" to be "Succeeded or Failed"
Jul  9 10:16:04.409: INFO: Pod "pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638": Phase="Pending", Reason="", readiness=false. Elapsed: 50.61154ms
Jul  9 10:16:06.468: INFO: Pod "pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109002129s
STEP: Saw pod success
Jul  9 10:16:06.468: INFO: Pod "pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638" satisfied condition "Succeeded or Failed"
Jul  9 10:16:06.518: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638 container secret-volume-test: <nil>
STEP: delete the pod
Jul  9 10:16:06.626: INFO: Waiting for pod pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638 to disappear
Jul  9 10:16:06.678: INFO: Pod pod-secrets-f6e6a0c4-571c-4fec-9de4-eb9df621c638 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:16:06.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6026" for this suite.
STEP: Destroying namespace "secret-namespace-374" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:06.841: INFO: Only supported for providers [azure] (not aws)
... skipping 58 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":14,"skipped":80,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:15:22.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
• [SLOW TEST:46.038 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":15,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:08.891: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
Jul  9 10:16:06.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  9 10:16:07.159: INFO: Waiting up to 5m0s for pod "pod-f276b773-0729-496c-b966-2a81e8460045" in namespace "emptydir-4603" to be "Succeeded or Failed"
Jul  9 10:16:07.210: INFO: Pod "pod-f276b773-0729-496c-b966-2a81e8460045": Phase="Pending", Reason="", readiness=false. Elapsed: 50.497145ms
Jul  9 10:16:09.261: INFO: Pod "pod-f276b773-0729-496c-b966-2a81e8460045": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102175576s
STEP: Saw pod success
Jul  9 10:16:09.262: INFO: Pod "pod-f276b773-0729-496c-b966-2a81e8460045" satisfied condition "Succeeded or Failed"
Jul  9 10:16:09.314: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-f276b773-0729-496c-b966-2a81e8460045 container test-container: <nil>
STEP: delete the pod
Jul  9 10:16:09.422: INFO: Waiting for pod pod-f276b773-0729-496c-b966-2a81e8460045 to disappear
Jul  9 10:16:09.472: INFO: Pod pod-f276b773-0729-496c-b966-2a81e8460045 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:16:09.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4603" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 8 lines ...
Jul  9 10:15:42.839: INFO: Creating resource for dynamic PV
Jul  9 10:15:42.839: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-1954tfrcr
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jul  9 10:15:42.993: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  9 10:15:43.094: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

... skipping 84 lines (the same "PersistentVolumeClaim \"awsppdjj\" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims" error and diff repeated every 2s) ...
Jul  9 10:15:57.195: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:15:59.198: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:01.196: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:03.196: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:05.207: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:07.196: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:09.196: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:11.195: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:13.198: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1954tfrcr",
  	... // 3 identical fields
  }

Jul  9 10:16:13.299: INFO: Error updating pvc awsppdjj: PersistentVolumeClaim "awsppdjj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":10,"skipped":47,"failed":0}

SS
------------------------------
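The test above passes precisely because every update attempt was rejected: it polls the PVC update on a fixed interval until a timeout and succeeds only if the apiserver refuses each time. A sketch of that poll-until-timeout pattern, with a fake clock so it runs instantly (the names here are illustrative, not the e2e framework's):

```python
import time

def poll_expect_rejection(try_update, interval=2.0, timeout=30.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll try_update() until `timeout` expires; return True only if
    every attempt was rejected -- the expected outcome for a PVC whose
    StorageClass lacks allowVolumeExpansion."""
    deadline = clock() + timeout
    while clock() < deadline:
        if try_update():              # expansion unexpectedly accepted
            return False
        sleep(interval)
    return True

# Fake clock so the sketch does not actually wait 30 seconds.
class FakeClock:
    def __init__(self):
        self.t = 0.0
        self.attempts = 0
    def now(self):
        return self.t
    def sleep(self, d):
        self.t += d

fc = FakeClock()
def rejected_update():
    fc.attempts += 1
    return False                      # apiserver said Forbidden every time

ok = poll_expect_rejection(rejected_update, clock=fc.now, sleep=fc.sleep)
print(ok, fc.attempts)                # → True 15
```

The roughly 2-second spacing of the timestamps in the log (10:15:43, 10:15:45, ... 10:16:13) matches this interval/timeout shape.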
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Jul  9 10:16:04.094: INFO: PersistentVolumeClaim pvc-dj8ph found but phase is Pending instead of Bound.
Jul  9 10:16:06.161: INFO: PersistentVolumeClaim pvc-dj8ph found and phase=Bound (4.176010597s)
Jul  9 10:16:06.161: INFO: Waiting up to 3m0s for PersistentVolume local-dxj42 to have phase Bound
Jul  9 10:16:06.245: INFO: PersistentVolume local-dxj42 found and phase=Bound (84.660365ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w6kf
STEP: Creating a pod to test subpath
Jul  9 10:16:06.478: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w6kf" in namespace "provisioning-2795" to be "Succeeded or Failed"
Jul  9 10:16:06.529: INFO: Pod "pod-subpath-test-preprovisionedpv-w6kf": Phase="Pending", Reason="", readiness=false. Elapsed: 51.151979ms
Jul  9 10:16:08.581: INFO: Pod "pod-subpath-test-preprovisionedpv-w6kf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103328954s
Jul  9 10:16:10.634: INFO: Pod "pod-subpath-test-preprovisionedpv-w6kf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156246792s
Jul  9 10:16:12.686: INFO: Pod "pod-subpath-test-preprovisionedpv-w6kf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208241833s
STEP: Saw pod success
Jul  9 10:16:12.686: INFO: Pod "pod-subpath-test-preprovisionedpv-w6kf" satisfied condition "Succeeded or Failed"
Jul  9 10:16:12.741: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-w6kf container test-container-volume-preprovisionedpv-w6kf: <nil>
STEP: delete the pod
Jul  9 10:16:12.852: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w6kf to disappear
Jul  9 10:16:12.903: INFO: Pod pod-subpath-test-preprovisionedpv-w6kf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w6kf
Jul  9 10:16:12.903: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w6kf" in namespace "provisioning-2795"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":16,"skipped":92,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":61,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:16.512: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
Jul  9 10:16:18.772: INFO: PersistentVolumeClaim pvc-28bh4 found but phase is Pending instead of Bound.
Jul  9 10:16:20.823: INFO: PersistentVolumeClaim pvc-28bh4 found and phase=Bound (8.255946482s)
Jul  9 10:16:20.823: INFO: Waiting up to 3m0s for PersistentVolume local-mzbfp to have phase Bound
Jul  9 10:16:20.874: INFO: PersistentVolume local-mzbfp found and phase=Bound (50.403826ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pgqd
STEP: Creating a pod to test subpath
Jul  9 10:16:21.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pgqd" in namespace "provisioning-8237" to be "Succeeded or Failed"
Jul  9 10:16:21.080: INFO: Pod "pod-subpath-test-preprovisionedpv-pgqd": Phase="Pending", Reason="", readiness=false. Elapsed: 50.610536ms
Jul  9 10:16:23.131: INFO: Pod "pod-subpath-test-preprovisionedpv-pgqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101699693s
STEP: Saw pod success
Jul  9 10:16:23.131: INFO: Pod "pod-subpath-test-preprovisionedpv-pgqd" satisfied condition "Succeeded or Failed"
Jul  9 10:16:23.182: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-pgqd container test-container-volume-preprovisionedpv-pgqd: <nil>
STEP: delete the pod
Jul  9 10:16:23.290: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pgqd to disappear
Jul  9 10:16:23.340: INFO: Pod pod-subpath-test-preprovisionedpv-pgqd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pgqd
Jul  9 10:16:23.340: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pgqd" in namespace "provisioning-8237"
... skipping 86 lines ...
Jul  9 10:16:00.527: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  9 10:16:00.590: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathvkxwb] to have phase Bound
Jul  9 10:16:00.644: INFO: PersistentVolumeClaim csi-hostpathvkxwb found but phase is Pending instead of Bound.
Jul  9 10:16:02.703: INFO: PersistentVolumeClaim csi-hostpathvkxwb found and phase=Bound (2.112389323s)
STEP: Creating pod pod-subpath-test-dynamicpv-z944
STEP: Creating a pod to test subpath
Jul  9 10:16:02.863: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-z944" in namespace "provisioning-6958" to be "Succeeded or Failed"
Jul  9 10:16:02.914: INFO: Pod "pod-subpath-test-dynamicpv-z944": Phase="Pending", Reason="", readiness=false. Elapsed: 51.139082ms
Jul  9 10:16:04.966: INFO: Pod "pod-subpath-test-dynamicpv-z944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103675682s
Jul  9 10:16:07.019: INFO: Pod "pod-subpath-test-dynamicpv-z944": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156037744s
Jul  9 10:16:09.071: INFO: Pod "pod-subpath-test-dynamicpv-z944": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207929311s
Jul  9 10:16:11.122: INFO: Pod "pod-subpath-test-dynamicpv-z944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.259643645s
STEP: Saw pod success
Jul  9 10:16:11.122: INFO: Pod "pod-subpath-test-dynamicpv-z944" satisfied condition "Succeeded or Failed"
Jul  9 10:16:11.174: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-z944 container test-container-volume-dynamicpv-z944: <nil>
STEP: delete the pod
Jul  9 10:16:11.289: INFO: Waiting for pod pod-subpath-test-dynamicpv-z944 to disappear
Jul  9 10:16:11.340: INFO: Pod pod-subpath-test-dynamicpv-z944 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-z944
Jul  9 10:16:11.340: INFO: Deleting pod "pod-subpath-test-dynamicpv-z944" in namespace "provisioning-6958"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":12,"skipped":133,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:25.991: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 107 lines ...
Jul  9 10:16:18.569: INFO: PersistentVolumeClaim pvc-tpqm2 found but phase is Pending instead of Bound.
Jul  9 10:16:20.620: INFO: PersistentVolumeClaim pvc-tpqm2 found and phase=Bound (8.265784748s)
Jul  9 10:16:20.620: INFO: Waiting up to 3m0s for PersistentVolume local-f5kbv to have phase Bound
Jul  9 10:16:20.672: INFO: PersistentVolume local-f5kbv found and phase=Bound (51.160319ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2krg
STEP: Creating a pod to test subpath
Jul  9 10:16:20.829: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2krg" in namespace "provisioning-7777" to be "Succeeded or Failed"
Jul  9 10:16:20.881: INFO: Pod "pod-subpath-test-preprovisionedpv-2krg": Phase="Pending", Reason="", readiness=false. Elapsed: 51.25719ms
Jul  9 10:16:22.933: INFO: Pod "pod-subpath-test-preprovisionedpv-2krg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103753532s
Jul  9 10:16:24.987: INFO: Pod "pod-subpath-test-preprovisionedpv-2krg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15718905s
STEP: Saw pod success
Jul  9 10:16:24.987: INFO: Pod "pod-subpath-test-preprovisionedpv-2krg" satisfied condition "Succeeded or Failed"
Jul  9 10:16:25.039: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-2krg container test-container-subpath-preprovisionedpv-2krg: <nil>
STEP: delete the pod
Jul  9 10:16:25.156: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2krg to disappear
Jul  9 10:16:25.209: INFO: Pod pod-subpath-test-preprovisionedpv-2krg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2krg
Jul  9 10:16:25.209: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2krg" in namespace "provisioning-7777"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":16,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
Jul  9 10:15:48.947: INFO: PersistentVolumeClaim pvc-f7vd7 found but phase is Pending instead of Bound.
Jul  9 10:15:51.001: INFO: PersistentVolumeClaim pvc-f7vd7 found and phase=Bound (2.105220581s)
Jul  9 10:15:51.001: INFO: Waiting up to 3m0s for PersistentVolume aws-k87n6 to have phase Bound
Jul  9 10:15:51.052: INFO: PersistentVolume aws-k87n6 found and phase=Bound (51.434839ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-c277
STEP: Creating a pod to test exec-volume-test
Jul  9 10:15:51.208: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-c277" in namespace "volume-7080" to be "Succeeded or Failed"
Jul  9 10:15:51.261: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 52.586705ms
Jul  9 10:15:53.320: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111367021s
Jul  9 10:15:55.373: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1639879s
Jul  9 10:15:57.424: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215651322s
Jul  9 10:15:59.477: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26817276s
Jul  9 10:16:01.529: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 10.319999523s
Jul  9 10:16:03.580: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 12.371754368s
Jul  9 10:16:05.633: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 14.424651864s
Jul  9 10:16:07.686: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Pending", Reason="", readiness=false. Elapsed: 16.477064347s
Jul  9 10:16:09.738: INFO: Pod "exec-volume-test-preprovisionedpv-c277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.529228614s
STEP: Saw pod success
Jul  9 10:16:09.738: INFO: Pod "exec-volume-test-preprovisionedpv-c277" satisfied condition "Succeeded or Failed"
Jul  9 10:16:09.789: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-c277 container exec-container-preprovisionedpv-c277: <nil>
STEP: delete the pod
Jul  9 10:16:09.900: INFO: Waiting for pod exec-volume-test-preprovisionedpv-c277 to disappear
Jul  9 10:16:09.951: INFO: Pod exec-volume-test-preprovisionedpv-c277 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-c277
Jul  9 10:16:09.952: INFO: Deleting pod "exec-volume-test-preprovisionedpv-c277" in namespace "volume-7080"
STEP: Deleting pv and pvc
Jul  9 10:16:10.003: INFO: Deleting PersistentVolumeClaim "pvc-f7vd7"
Jul  9 10:16:10.056: INFO: Deleting PersistentVolume "aws-k87n6"
Jul  9 10:16:10.257: INFO: Couldn't delete PD "aws://us-west-1a/vol-0f163b069555e6850", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f163b069555e6850 is currently attached to i-0626b22a09f5992cc
	status code: 400, request id: 6b9e1b48-c26e-424f-a578-81d671bf1832
Jul  9 10:16:15.586: INFO: Couldn't delete PD "aws://us-west-1a/vol-0f163b069555e6850", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f163b069555e6850 is currently attached to i-0626b22a09f5992cc
	status code: 400, request id: 57d87559-803d-412e-ad7f-7e3ab1c53201
Jul  9 10:16:20.908: INFO: Couldn't delete PD "aws://us-west-1a/vol-0f163b069555e6850", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f163b069555e6850 is currently attached to i-0626b22a09f5992cc
	status code: 400, request id: 7b524dbb-706d-49b9-bc68-a42d3f2a74ba
Jul  9 10:16:26.217: INFO: Couldn't delete PD "aws://us-west-1a/vol-0f163b069555e6850", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f163b069555e6850 is currently attached to i-0626b22a09f5992cc
	status code: 400, request id: 1f016898-d756-420c-9eca-167f59e7c6e9
Jul  9 10:16:31.589: INFO: Successfully deleted PD "aws://us-west-1a/vol-0f163b069555e6850".
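The five delete attempts above show the cleanup pattern for an EBS volume that is still attached: `DeleteVolume` returns `VolumeInUse` until the instance finishes detaching, so the harness sleeps 5s and retries. A sketch of that loop, where `delete_volume` and the error text are illustrative stand-ins, not the AWS SDK:

```python
import time

def delete_with_retry(delete_volume, retries=10, backoff=5.0, sleep=time.sleep):
    """Retry a volume delete that fails while the volume is attached,
    sleeping `backoff` seconds between attempts. Any error other than
    VolumeInUse, or exhausting the retries, propagates."""
    for attempt in range(1, retries + 1):
        try:
            delete_volume()
            return attempt            # number of calls it took
        except RuntimeError as err:
            if "VolumeInUse" not in str(err) or attempt == retries:
                raise
            sleep(backoff)

# Simulate the log: four VolumeInUse rejections, then a successful delete.
calls = {"n": 0}
def fake_delete():
    calls["n"] += 1
    if calls["n"] <= 4:
        raise RuntimeError("VolumeInUse: volume is currently attached")

print(delete_with_retry(fake_delete, sleep=lambda _: None))  # → 5
```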
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:16:31.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7080" for this suite.
... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":15,"skipped":137,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Jul  9 10:14:23.662: INFO: successfully validated that service sourceip-test in namespace services-3180 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Jul  9 10:14:23.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761422463, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761422463, loc:(*time.Location)(0xa085940)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761422463, loc:(*time.Location)(0xa085940)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761422463, loc:(*time.Location)(0xa085940)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7dd78944dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  9 10:14:25.920: INFO: Waiting up to 2m0s to get response from 100.71.215.251:8080
Jul  9 10:14:25.921: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip'
Jul  9 10:14:56.525: INFO: rc: 28
Jul  9 10:14:56.525: INFO: got err: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
... skipping 2 similar curl attempts (10:14:58 and 10:15:31, same exit status 28, retry until timeout) ...
Jul  9 10:16:03.805: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip'
Jul  9 10:16:34.458: INFO: rc: 28
Jul  9 10:16:34.458: INFO: got err: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Jul  9 10:16:36.459: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
        },
        Code: 28,
    }
    error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip
    command terminated with exit code 28
    
    error:
    exit status 28
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execSourceIPTest(0x0, 0x0, 0x0, 0x0, 0xc0033d9f60, 0x1a, 0xc00306f8c0, 0x15, 0xc003e30c10, 0xd, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133 +0x4d9
... skipping 239 lines ...
• Failure [141.103 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:924

  Jul  9 10:16:36.459: Unexpected error:
      <exec.CodeExitError>: {
          Err: {
              s: "error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
          },
          Code: 28,
      }
      error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3180 exec pause-pod-7dd78944dc-gmkls -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip:
      Command stdout:
      
      stderr:
      + curl -q -s --connect-timeout 30 100.71.215.251:8080/clientip
      command terminated with exit code 28
      
      error:
      exit status 28
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133
------------------------------
{"msg":"FAILED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":4,"skipped":50,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":14,"skipped":148,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 18 lines ...
Jul  9 10:16:33.164: INFO: PersistentVolumeClaim pvc-hf2mc found but phase is Pending instead of Bound.
Jul  9 10:16:35.216: INFO: PersistentVolumeClaim pvc-hf2mc found and phase=Bound (6.206809526s)
Jul  9 10:16:35.216: INFO: Waiting up to 3m0s for PersistentVolume local-gdf9b to have phase Bound
Jul  9 10:16:35.268: INFO: PersistentVolume local-gdf9b found and phase=Bound (51.190226ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-htwx
STEP: Creating a pod to test exec-volume-test
Jul  9 10:16:35.424: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-htwx" in namespace "volume-5934" to be "Succeeded or Failed"
Jul  9 10:16:35.475: INFO: Pod "exec-volume-test-preprovisionedpv-htwx": Phase="Pending", Reason="", readiness=false. Elapsed: 51.617874ms
Jul  9 10:16:37.528: INFO: Pod "exec-volume-test-preprovisionedpv-htwx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104241653s
Jul  9 10:16:39.582: INFO: Pod "exec-volume-test-preprovisionedpv-htwx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158586689s
STEP: Saw pod success
Jul  9 10:16:39.582: INFO: Pod "exec-volume-test-preprovisionedpv-htwx" satisfied condition "Succeeded or Failed"
Jul  9 10:16:39.634: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-htwx container exec-container-preprovisionedpv-htwx: <nil>
STEP: delete the pod
Jul  9 10:16:39.743: INFO: Waiting for pod exec-volume-test-preprovisionedpv-htwx to disappear
Jul  9 10:16:39.795: INFO: Pod exec-volume-test-preprovisionedpv-htwx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-htwx
Jul  9 10:16:39.795: INFO: Deleting pod "exec-volume-test-preprovisionedpv-htwx" in namespace "volume-5934"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":13,"skipped":145,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:40.904: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 79 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":46,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:31.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":6,"skipped":46,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":42,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:24.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
• [SLOW TEST:27.871 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":7,"skipped":42,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:52.037: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
Jul  9 10:16:48.058: INFO: PersistentVolumeClaim pvc-bj882 found but phase is Pending instead of Bound.
Jul  9 10:16:50.108: INFO: PersistentVolumeClaim pvc-bj882 found and phase=Bound (10.306060542s)
Jul  9 10:16:50.108: INFO: Waiting up to 3m0s for PersistentVolume local-rdj5z to have phase Bound
Jul  9 10:16:50.159: INFO: PersistentVolume local-rdj5z found and phase=Bound (50.158512ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cgbg
STEP: Creating a pod to test subpath
Jul  9 10:16:50.310: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cgbg" in namespace "provisioning-7259" to be "Succeeded or Failed"
Jul  9 10:16:50.361: INFO: Pod "pod-subpath-test-preprovisionedpv-cgbg": Phase="Pending", Reason="", readiness=false. Elapsed: 50.128904ms
Jul  9 10:16:52.411: INFO: Pod "pod-subpath-test-preprovisionedpv-cgbg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100861306s
Jul  9 10:16:54.462: INFO: Pod "pod-subpath-test-preprovisionedpv-cgbg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151760022s
STEP: Saw pod success
Jul  9 10:16:54.462: INFO: Pod "pod-subpath-test-preprovisionedpv-cgbg" satisfied condition "Succeeded or Failed"
Jul  9 10:16:54.513: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-cgbg container test-container-volume-preprovisionedpv-cgbg: <nil>
STEP: delete the pod
Jul  9 10:16:54.620: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cgbg to disappear
Jul  9 10:16:54.670: INFO: Pod pod-subpath-test-preprovisionedpv-cgbg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cgbg
Jul  9 10:16:54.670: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cgbg" in namespace "provisioning-7259"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":16,"skipped":141,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:55.449: INFO: Only supported for providers [openstack] (not aws)
... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:913
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:55.526: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 57 lines ...
• [SLOW TEST:17.241 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":149,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:57.740: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 43 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:55.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:249
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:16:57.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4453" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":8,"skipped":52,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:16:58.034: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:31.297: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jul  9 10:16:48.698: INFO: PersistentVolumeClaim pvc-4qpcj found but phase is Pending instead of Bound.
Jul  9 10:16:50.749: INFO: PersistentVolumeClaim pvc-4qpcj found and phase=Bound (16.467238785s)
Jul  9 10:16:50.749: INFO: Waiting up to 3m0s for PersistentVolume local-9dq7r to have phase Bound
Jul  9 10:16:50.800: INFO: PersistentVolume local-9dq7r found and phase=Bound (51.375889ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dj9g
STEP: Creating a pod to test subpath
Jul  9 10:16:50.956: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dj9g" in namespace "provisioning-4603" to be "Succeeded or Failed"
Jul  9 10:16:51.008: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g": Phase="Pending", Reason="", readiness=false. Elapsed: 51.772188ms
Jul  9 10:16:53.060: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103788908s
Jul  9 10:16:55.113: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156449474s
STEP: Saw pod success
Jul  9 10:16:55.113: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g" satisfied condition "Succeeded or Failed"
Jul  9 10:16:55.164: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-dj9g container test-container-subpath-preprovisionedpv-dj9g: <nil>
STEP: delete the pod
Jul  9 10:16:55.277: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dj9g to disappear
Jul  9 10:16:55.329: INFO: Pod pod-subpath-test-preprovisionedpv-dj9g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dj9g
Jul  9 10:16:55.329: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dj9g" in namespace "provisioning-4603"
STEP: Creating pod pod-subpath-test-preprovisionedpv-dj9g
STEP: Creating a pod to test subpath
Jul  9 10:16:55.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dj9g" in namespace "provisioning-4603" to be "Succeeded or Failed"
Jul  9 10:16:55.483: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g": Phase="Pending", Reason="", readiness=false. Elapsed: 51.08559ms
Jul  9 10:16:57.534: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102485922s
STEP: Saw pod success
Jul  9 10:16:57.534: INFO: Pod "pod-subpath-test-preprovisionedpv-dj9g" satisfied condition "Succeeded or Failed"
Jul  9 10:16:57.585: INFO: Trying to get logs from node ip-172-20-55-238.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-dj9g container test-container-subpath-preprovisionedpv-dj9g: <nil>
STEP: delete the pod
Jul  9 10:16:57.695: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dj9g to disappear
Jul  9 10:16:57.746: INFO: Pod pod-subpath-test-preprovisionedpv-dj9g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dj9g
Jul  9 10:16:57.746: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dj9g" in namespace "provisioning-4603"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:390
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":18,"skipped":92,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:58.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Jul  9 10:16:58.359: INFO: Waiting up to 5m0s for pod "client-containers-529df48e-dfc3-4c83-862b-312a29f71a90" in namespace "containers-6192" to be "Succeeded or Failed"
Jul  9 10:16:58.411: INFO: Pod "client-containers-529df48e-dfc3-4c83-862b-312a29f71a90": Phase="Pending", Reason="", readiness=false. Elapsed: 51.253807ms
Jul  9 10:17:00.464: INFO: Pod "client-containers-529df48e-dfc3-4c83-862b-312a29f71a90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104365863s
STEP: Saw pod success
Jul  9 10:17:00.464: INFO: Pod "client-containers-529df48e-dfc3-4c83-862b-312a29f71a90" satisfied condition "Succeeded or Failed"
Jul  9 10:17:00.516: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod client-containers-529df48e-dfc3-4c83-862b-312a29f71a90 container agnhost-container: <nil>
STEP: delete the pod
Jul  9 10:17:00.627: INFO: Waiting for pod client-containers-529df48e-dfc3-4c83-862b-312a29f71a90 to disappear
Jul  9 10:17:00.680: INFO: Pod client-containers-529df48e-dfc3-4c83-862b-312a29f71a90 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:17:00.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6192" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:17:00.801: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:16:00.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-818-crds.webhook.example.com via the AdmissionRegistration API
Jul  9 10:16:15.401: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:16:25.604: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:16:35.806: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:16:46.011: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:16:56.113: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:16:56.114: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 432 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:16:56.114: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":6,"skipped":21,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
STEP: creating replication controller externalip-test in namespace services-3319
I0709 10:14:46.546145   12299 runners.go:190] Created replication controller with name: externalip-test, namespace: services-3319, replica count: 2
I0709 10:14:49.646588   12299 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  9 10:14:49.646: INFO: Creating new exec pod
Jul  9 10:14:54.801: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:15:00.430: INFO: rc: 1
Jul  9 10:15:00.431: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 112 lines ...
Jul  9 10:15:49.432: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:15:55.072: INFO: rc: 1
Jul  9 10:15:55.072: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:15:55.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:01.046: INFO: rc: 1
Jul  9 10:16:01.046: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:01.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:07.093: INFO: rc: 1
Jul  9 10:16:07.093: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:07.432: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:13.063: INFO: rc: 1
Jul  9 10:16:13.063: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:13.432: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:19.058: INFO: rc: 1
Jul  9 10:16:19.058: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:19.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:25.070: INFO: rc: 1
Jul  9 10:16:25.070: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ + nc -vecho -t hostName -w
 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:25.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:31.049: INFO: rc: 1
Jul  9 10:16:31.049: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:31.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:37.059: INFO: rc: 1
Jul  9 10:16:37.059: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:37.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:43.037: INFO: rc: 1
Jul  9 10:16:43.037: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ + nc -v -techo -w hostName 2
 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:43.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:49.068: INFO: rc: 1
Jul  9 10:16:49.068: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:49.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:16:55.038: INFO: rc: 1
Jul  9 10:16:55.038: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ + nc -v -techo -w hostName 2
 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:16:55.431: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:17:01.054: INFO: rc: 1
Jul  9 10:17:01.054: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:17:01.054: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jul  9 10:17:06.698: INFO: rc: 1
Jul  9 10:17:06.698: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3319 exec execpodmdccd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:17:06.698: FAIL: Unexpected error:
    <*errors.errorString | 0xc003ca0920>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol
occurred
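The failure above is the e2e framework's service-reachability check giving up: it re-runs `echo hostName | nc -v -t -w 2 externalip-test 80` inside the exec pod until the endpoint answers or a 2-minute budget is exhausted, and here DNS resolution (`getaddrinfo`) never succeeds. A minimal sketch of that retry loop, using a plain TCP connect in place of the `nc` pipeline (the function name and intervals are illustrative, not the framework's actual code):

```python
import socket
import time

def wait_reachable(host: str, port: int,
                   timeout_s: float = 120.0, interval_s: float = 6.0) -> bool:
    """Retry a TCP connect (what the `nc` probe in the log does) until the
    endpoint answers or the overall budget runs out. DNS failures such as
    "getaddrinfo: Try again" surface as OSError and are retried too."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # Per-attempt timeout of 2s, mirroring `nc -w 2` in the log.
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(interval_s)
    return False
```

With the defaults this reproduces the log's shape: one attempt about every 6 seconds for 2 minutes, then a terminal "not reachable within 2m0s" failure.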

... skipping 66918 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":29,"skipped":287,"failed":4,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:06.807: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 91 lines ...
Jul  9 10:37:15.831: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rhlb6] to have phase Bound
Jul  9 10:37:15.880: INFO: PersistentVolumeClaim pvc-rhlb6 found and phase=Bound (49.425212ms)
STEP: Deleting the previously created pod
Jul  9 10:37:26.128: INFO: Deleting pod "pvc-volume-tester-5h8l8" in namespace "csi-mock-volumes-2831"
Jul  9 10:37:26.179: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5h8l8" to be fully deleted
STEP: Checking CSI driver logs
Jul  9 10:37:40.345: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b6876b92-c395-4c65-a5af-3f5f0420c4ce/volumes/kubernetes.io~csi/pvc-a5adde5b-f55f-4abd-ab82-be4af68ad771/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-5h8l8
Jul  9 10:37:40.345: INFO: Deleting pod "pvc-volume-tester-5h8l8" in namespace "csi-mock-volumes-2831"
STEP: Deleting claim pvc-rhlb6
Jul  9 10:37:40.493: INFO: Waiting up to 2m0s for PersistentVolume pvc-a5adde5b-f55f-4abd-ab82-be4af68ad771 to get deleted
Jul  9 10:37:40.541: INFO: PersistentVolume pvc-a5adde5b-f55f-4abd-ab82-be4af68ad771 was removed
STEP: Deleting storageclass csi-mock-volumes-2831-scwfvwq
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-plsp
STEP: Creating a pod to test atomic-volume-subpath
Jul  9 10:37:56.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-plsp" in namespace "subpath-9997" to be "Succeeded or Failed"
Jul  9 10:37:56.335: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Pending", Reason="", readiness=false. Elapsed: 51.000361ms
Jul  9 10:37:58.388: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103386995s
Jul  9 10:38:00.439: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 4.154776478s
Jul  9 10:38:02.491: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 6.206287637s
Jul  9 10:38:04.542: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 8.257908897s
Jul  9 10:38:06.594: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 10.309751914s
... skipping 2 lines ...
Jul  9 10:38:12.751: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 16.466710121s
Jul  9 10:38:14.803: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 18.518499477s
Jul  9 10:38:16.856: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 20.571312728s
Jul  9 10:38:18.907: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Running", Reason="", readiness=true. Elapsed: 22.622895869s
Jul  9 10:38:20.959: INFO: Pod "pod-subpath-test-configmap-plsp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.674435326s
STEP: Saw pod success
Jul  9 10:38:20.959: INFO: Pod "pod-subpath-test-configmap-plsp" satisfied condition "Succeeded or Failed"
Jul  9 10:38:21.010: INFO: Trying to get logs from node ip-172-20-54-0.us-west-1.compute.internal pod pod-subpath-test-configmap-plsp container test-container-subpath-configmap-plsp: <nil>
STEP: delete the pod
Jul  9 10:38:21.126: INFO: Waiting for pod pod-subpath-test-configmap-plsp to disappear
Jul  9 10:38:21.177: INFO: Pod pod-subpath-test-configmap-plsp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-plsp
Jul  9 10:38:21.177: INFO: Deleting pod "pod-subpath-test-configmap-plsp" in namespace "subpath-9997"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":153,"failed":1,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: Gathering metrics
Jul  9 10:38:24.305: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:38:24.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-869" for this suite.


• [SLOW TEST:300.888 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":6,"skipped":55,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":24,"skipped":236,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:25.811: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:38:27.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5311" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":48,"skipped":311,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:38:16.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":49,"skipped":311,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:28.735: INFO: Only supported for providers [gce gke] (not aws)
... skipping 40 lines ...
• [SLOW TEST:23.194 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:280
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":30,"skipped":293,"failed":4,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:30.041: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":7,"skipped":61,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:38:27.880: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Jul  9 10:38:32.936: INFO: PersistentVolumeClaim pvc-szqkj found but phase is Pending instead of Bound.
Jul  9 10:38:34.987: INFO: PersistentVolumeClaim pvc-szqkj found and phase=Bound (4.154097922s)
Jul  9 10:38:34.987: INFO: Waiting up to 3m0s for PersistentVolume local-64qk9 to have phase Bound
Jul  9 10:38:35.037: INFO: PersistentVolume local-64qk9 found and phase=Bound (50.265766ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xq72
STEP: Creating a pod to test subpath
Jul  9 10:38:35.190: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xq72" in namespace "provisioning-3691" to be "Succeeded or Failed"
Jul  9 10:38:35.241: INFO: Pod "pod-subpath-test-preprovisionedpv-xq72": Phase="Pending", Reason="", readiness=false. Elapsed: 50.408342ms
Jul  9 10:38:37.293: INFO: Pod "pod-subpath-test-preprovisionedpv-xq72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102535713s
STEP: Saw pod success
Jul  9 10:38:37.293: INFO: Pod "pod-subpath-test-preprovisionedpv-xq72" satisfied condition "Succeeded or Failed"
Jul  9 10:38:37.343: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-xq72 container test-container-subpath-preprovisionedpv-xq72: <nil>
STEP: delete the pod
Jul  9 10:38:37.452: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xq72 to disappear
Jul  9 10:38:37.503: INFO: Pod pod-subpath-test-preprovisionedpv-xq72 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xq72
Jul  9 10:38:37.503: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xq72" in namespace "provisioning-3691"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":61,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:38.261: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 149 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":24,"skipped":133,"failed":2,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:40.579: INFO: Only supported for providers [gce gke] (not aws)
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":43,"skipped":276,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1037
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":31,"skipped":300,"failed":4,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:42.693: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:38:44.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5613" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":44,"skipped":279,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

SS
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":9,"skipped":75,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:37:43.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:62.755 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe that the PodDisruptionBudget status is not updated for unmanaged pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:191
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods","total":-1,"completed":10,"skipped":75,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:46.163: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:38:49.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4650" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":45,"skipped":281,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:49.187: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":32,"skipped":313,"failed":4,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:38:50.186: INFO: Only supported for providers [gce gke] (not aws)
... skipping 180 lines ...
• [SLOW TEST:30.651 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:406
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":50,"skipped":316,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:13.954 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":11,"skipped":80,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:00.184: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 278 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 20 lines ...
Jul  9 10:39:02.401: INFO: PersistentVolumeClaim pvc-jqps6 found but phase is Pending instead of Bound.
Jul  9 10:39:04.452: INFO: PersistentVolumeClaim pvc-jqps6 found and phase=Bound (2.100287712s)
Jul  9 10:39:04.452: INFO: Waiting up to 3m0s for PersistentVolume local-c2fnv to have phase Bound
Jul  9 10:39:04.502: INFO: PersistentVolume local-c2fnv found and phase=Bound (49.678378ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-srtg
STEP: Creating a pod to test subpath
Jul  9 10:39:04.652: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-srtg" in namespace "provisioning-3305" to be "Succeeded or Failed"
Jul  9 10:39:04.701: INFO: Pod "pod-subpath-test-preprovisionedpv-srtg": Phase="Pending", Reason="", readiness=false. Elapsed: 49.3643ms
Jul  9 10:39:06.752: INFO: Pod "pod-subpath-test-preprovisionedpv-srtg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10019746s
Jul  9 10:39:08.803: INFO: Pod "pod-subpath-test-preprovisionedpv-srtg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151070913s
STEP: Saw pod success
Jul  9 10:39:08.803: INFO: Pod "pod-subpath-test-preprovisionedpv-srtg" satisfied condition "Succeeded or Failed"
Jul  9 10:39:08.852: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-srtg container test-container-subpath-preprovisionedpv-srtg: <nil>
STEP: delete the pod
Jul  9 10:39:08.957: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-srtg to disappear
Jul  9 10:39:09.006: INFO: Pod pod-subpath-test-preprovisionedpv-srtg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-srtg
Jul  9 10:39:09.007: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-srtg" in namespace "provisioning-3305"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":51,"skipped":318,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:09.772: INFO: Only supported for providers [azure] (not aws)
... skipping 135 lines ...
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:39:09.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-d1cef204-7475-40c2-8958-8bba05a1d0e3
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:39:10.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8196" for this suite.
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":29,"skipped":280,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:12.661: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-385cbcd0-f1ab-4fca-a4d5-f7c514d26839
STEP: Creating a pod to test consume configMaps
Jul  9 10:39:13.049: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3" in namespace "projected-1323" to be "Succeeded or Failed"
Jul  9 10:39:13.100: INFO: Pod "pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 50.887916ms
Jul  9 10:39:15.152: INFO: Pod "pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10285908s
STEP: Saw pod success
Jul  9 10:39:15.152: INFO: Pod "pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3" satisfied condition "Succeeded or Failed"
Jul  9 10:39:15.207: INFO: Trying to get logs from node ip-172-20-42-78.us-west-1.compute.internal pod pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3 container agnhost-container: <nil>
STEP: delete the pod
Jul  9 10:39:15.316: INFO: Waiting for pod pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3 to disappear
Jul  9 10:39:15.368: INFO: Pod pod-projected-configmaps-3bca48b6-45bf-4e3c-9608-d0ce40364ef3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:39:15.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1323" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":286,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:15.507: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":39,"skipped":215,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:39:17.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:39:18.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5851" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":40,"skipped":215,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:18.489: INFO: Only supported for providers [gce gke] (not aws)
... skipping 239 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":33,"skipped":350,"failed":4,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:28.275 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":25,"skipped":153,"failed":2,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":9,"skipped":65,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:41.458: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 89 lines ...
STEP: Creating a kubernetes client
Jul  9 10:39:00.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jul  9 10:39:00.511: INFO: PodSpec: initContainers in spec.initContainers
Jul  9 10:39:44.227: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-82fb4550-31f7-4763-865d-8776ee3130e2", GenerateName:"", Namespace:"init-container-8137", SelfLink:"", UID:"6227c913-c33f-4a2d-aed6-0aec954639d3", ResourceVersion:"44305", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761423940, loc:(*time.Location)(0xa085940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"511807013"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00512d980), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00512d998), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00512d9b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00512d9c8), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-rl66q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001a673a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-rl66q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-rl66q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-rl66q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004280a18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"ip-172-20-54-0.us-west-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005e8a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004280a90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004280ab0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004280ab8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004280abc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003e9d2d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423940, loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423940, loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423940, 
loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423940, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.54.0", PodIP:"100.96.4.43", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.4.43"}}, StartTime:(*v1.Time)(0xc00512d9f8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0005e8bd0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0005e8cb0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://a8c2fe07f8bfaa1d6d070e5aafa855e65d3e9d14645b41d333c1011156b3e42d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a67580), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a67520), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc004280b3f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:39:44.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8137" for this suite.


• [SLOW TEST:44.081 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":12,"skipped":98,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:5.023 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1007
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":13,"skipped":108,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:49.456: INFO: Only supported for providers [gce gke] (not aws)
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":41,"skipped":271,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:50.039: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 138 lines ...
• [SLOW TEST:6.609 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":14,"skipped":124,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:39:56.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9851" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":15,"skipped":126,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:39:56.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  9 10:39:57.141: INFO: Waiting up to 5m0s for pod "busybox-user-65534-cbba1aa9-3140-4726-be71-8387ae705c6e" in namespace "security-context-test-3443" to be "Succeeded or Failed"
Jul  9 10:39:57.192: INFO: Pod "busybox-user-65534-cbba1aa9-3140-4726-be71-8387ae705c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 51.570309ms
Jul  9 10:39:59.245: INFO: Pod "busybox-user-65534-cbba1aa9-3140-4726-be71-8387ae705c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104585165s
Jul  9 10:39:59.246: INFO: Pod "busybox-user-65534-cbba1aa9-3140-4726-be71-8387ae705c6e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:39:59.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3443" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":129,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  9 10:37:02.328: INFO: Creating ReplicaSet my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3
Jul  9 10:37:02.430: INFO: Pod name my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3: Found 1 pods out of 1
Jul  9 10:37:02.430: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3" is running
Jul  9 10:37:06.533: INFO: Pod "my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-09 10:37:02 +0000 UTC Reason: Message:}])
Jul  9 10:37:06.534: INFO: Trying to dial the pod
Jul  9 10:37:41.687: INFO: Controller my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3: Failed to GET from replica 1 [my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg]: the server is currently unable to handle the request (get pods my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423822, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  9 10:38:16.694: INFO: Controller my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3: Failed to GET from replica 1 [my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg]: the server is currently unable to handle the request (get pods my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423822, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  9 10:38:51.686: INFO: Controller my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3: Failed to GET from replica 1 [my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg]: the server is currently unable to handle the request (get pods my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423822, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  9 10:39:26.686: INFO: Controller my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3: Failed to GET from replica 1 [my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg]: the server is currently unable to handle the request (get pods my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423822, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  9 10:39:56.839: INFO: Controller my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3: Failed to GET from replica 1 [my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg]: the server is currently unable to handle the request (get pods my-hostname-basic-89c9ab75-32ab-4d93-9d42-29f83eefb7e3-xctzg)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423822, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  9 10:39:56.839: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00065e000)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 257 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:39:56.839: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110
------------------------------
{"msg":"FAILED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":6,"skipped":78,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:59.494: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 80 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":46,"skipped":284,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:39:40.374: INFO: >>> kubeConfig: /root/.kube/config
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should verify that all csinodes have volume limits
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":47,"skipped":284,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:39:59.576: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 154 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":29,"skipped":232,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:00.184: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1155
STEP: Create the cronjob
STEP: Wait for the CronJob to create new Job
STEP: Delete the cronjob
STEP: Verify if cronjob does not leave jobs nor pods behind
STEP: Gathering metrics
Jul  9 10:40:00.763: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:00.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-991" for this suite.


... skipping 27 lines ...
• [SLOW TEST:28.824 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":26,"skipped":157,"failed":2,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
Jul  9 10:39:23.100: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6wx4f] to have phase Bound
Jul  9 10:39:23.152: INFO: PersistentVolumeClaim pvc-6wx4f found and phase=Bound (51.010344ms)
STEP: Deleting the previously created pod
Jul  9 10:39:29.411: INFO: Deleting pod "pvc-volume-tester-sclgk" in namespace "csi-mock-volumes-2071"
Jul  9 10:39:29.464: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sclgk" to be fully deleted
STEP: Checking CSI driver logs
Jul  9 10:39:33.625: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7c600858-d084-4756-9373-581543cdd80c/volumes/kubernetes.io~csi/pvc-49245ddf-eacc-4b4d-9c19-b1ceadec1b22/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-sclgk
Jul  9 10:39:33.625: INFO: Deleting pod "pvc-volume-tester-sclgk" in namespace "csi-mock-volumes-2071"
STEP: Deleting claim pvc-6wx4f
Jul  9 10:39:33.780: INFO: Waiting up to 2m0s for PersistentVolume pvc-49245ddf-eacc-4b4d-9c19-b1ceadec1b22 to get deleted
Jul  9 10:39:33.831: INFO: PersistentVolume pvc-49245ddf-eacc-4b4d-9c19-b1ceadec1b22 found and phase=Released (50.914903ms)
Jul  9 10:39:35.883: INFO: PersistentVolume pvc-49245ddf-eacc-4b4d-9c19-b1ceadec1b22 found and phase=Released (2.102809092s)
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":31,"skipped":300,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:01.953: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      running a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:517
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":42,"skipped":276,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:10.619: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 93 lines ...
• [SLOW TEST:11.391 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":159,"failed":2,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:12.851: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":52,"skipped":342,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:39:10.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
• [SLOW TEST:62.577 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":53,"skipped":342,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:12.906: INFO: Only supported for providers [gce gke] (not aws)
... skipping 41 lines ...
  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:115
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:12:06.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 87 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:14.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3185" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}
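The passing test above checks that a pod whose containers have equal resource requests and limits for cpu and memory is assigned the Guaranteed QoS class. A simplified sketch of the classification rule (a hypothetical helper, not the kubelet's actual code — the real rules are stricter, e.g. Guaranteed requires cpu and memory limits on every container):

```go
package main

import "fmt"

// qosClass sketches how a QoS class follows from requests/limits:
// BestEffort when nothing is set, Guaranteed when every limit equals its
// request, Burstable otherwise. (Simplified; hypothetical helper.)
func qosClass(requests, limits map[string]string) string {
	if len(requests) == 0 && len(limits) == 0 {
		return "BestEffort"
	}
	if len(limits) > 0 && len(requests) == len(limits) {
		for k, v := range requests {
			if limits[k] != v {
				return "Burstable"
			}
		}
		return "Guaranteed"
	}
	return "Burstable"
}

func main() {
	// Matching requests and limits, as in the test above.
	fmt.Println(qosClass(
		map[string]string{"cpu": "100m", "memory": "128Mi"},
		map[string]string{"cpu": "100m", "memory": "128Mi"},
	)) // Guaranteed
}
```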

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jul  9 10:40:16.559: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul  9 10:40:16.559: INFO: Deleting pod "simpletest-rc-to-be-deleted-24mdm" in namespace "gc-473"
Jul  9 10:40:16.615: INFO: Deleting pod "simpletest-rc-to-be-deleted-5lvbx" in namespace "gc-473"
Jul  9 10:40:16.673: INFO: Deleting pod "simpletest-rc-to-be-deleted-7bvj4" in namespace "gc-473"
Jul  9 10:40:16.727: INFO: Deleting pod "simpletest-rc-to-be-deleted-8lz5q" in namespace "gc-473"
Jul  9 10:40:16.780: INFO: Deleting pod "simpletest-rc-to-be-deleted-dcdvp" in namespace "gc-473"
[AfterEach] [sig-api-machinery] Garbage collector
... skipping 5 lines ...
• [SLOW TEST:311.537 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":6,"skipped":22,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
Jul  9 10:39:40.108: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  9 10:39:42.057: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  9 10:39:42.108: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  9 10:39:44.057: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  9 10:39:44.108: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
Jul  9 10:40:14.110: FAIL: Timed out after 30.001s.
Expected
    <*errors.errorString | 0xc0044921f0>: {
        s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2021/07/09 10:39:22 Started HTTP server on port 8080\\n2021/07/09 10:39:22 Started UDP server on port  8081\\n\"",
    }
to be nil

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc003a77400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 17 lines ...
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:22 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-48-135.us-west-1.compute.internal} Started: Started container agnhost-container
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:23 +0000 UTC - event for pod-with-prestop-exec-hook: {default-scheduler } Scheduled: Successfully assigned container-lifecycle-hook-7399/pod-with-prestop-exec-hook to ip-172-20-55-238.us-west-1.compute.internal
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:24 +0000 UTC - event for pod-with-prestop-exec-hook: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:24 +0000 UTC - event for pod-with-prestop-exec-hook: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Created: Created container pod-with-prestop-exec-hook
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:24 +0000 UTC - event for pod-with-prestop-exec-hook: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Started: Started container pod-with-prestop-exec-hook
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:26 +0000 UTC - event for pod-with-prestop-exec-hook: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Killing: Stopping container pod-with-prestop-exec-hook
Jul  9 10:40:14.162: INFO: At 2021-07-09 10:39:41 +0000 UTC - event for pod-with-prestop-exec-hook: {kubelet ip-172-20-55-238.us-west-1.compute.internal} FailedPreStopHook: Exec lifecycle hook ([sh -c curl http://100.96.2.50:8080/echo?msg=prestop]) for Container "pod-with-prestop-exec-hook" in Pod "pod-with-prestop-exec-hook_container-lifecycle-hook-7399(6184acaa-2814-46d1-bca9-d98cee129b2f)" failed - error: command 'sh -c curl http://100.96.2.50:8080/echo?msg=prestop' exited with 137:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0, message: "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0"
Jul  9 10:40:14.212: INFO: POD                      NODE                                         PHASE    GRACE  CONDITIONS
Jul  9 10:40:14.212: INFO: pod-handle-http-request  ip-172-20-48-135.us-west-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:39:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:39:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:39:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:39:21 +0000 UTC  }]
Jul  9 10:40:14.212: INFO: 
Jul  9 10:40:14.264: INFO: 
... skipping 256 lines ...
    should execute prestop exec hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  9 10:40:14.110: Timed out after 30.001s.
    Expected
        <*errors.errorString | 0xc0044921f0>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2021/07/09 10:39:22 Started HTTP server on port 8080\\n2021/07/09 10:39:22 Started UDP server on port  8081\\n\"",
        }
    to be nil

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
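The failure above is the framework grepping the pod-handle-http-request server log for the prestop hook's request line; because the hook's curl exited with 137 (128+9, i.e. SIGKILL after the 15s exec timeout) before any bytes reached the server, the log contained only the two startup lines and the regexp never matched. A minimal sketch of that check, using the regexp and log strings taken verbatim from the error message:

```go
package main

import (
	"fmt"
	"regexp"
)

// prestopPattern is the pattern from the failure message above.
var prestopPattern = regexp.MustCompile(`GET /echo\?msg=prestop`)

// sawPrestop reports whether the captured server log contains the
// prestop hook's HTTP request line.
func sawPrestop(serverLog string) bool {
	return prestopPattern.MatchString(serverLog)
}

func main() {
	// The output actually captured in the failure: startup lines only.
	got := "2021/07/09 10:39:22 Started HTTP server on port 8080\n" +
		"2021/07/09 10:39:22 Started UDP server on port  8081\n"
	fmt.Println(sawPrestop(got))                                      // false: the hook's request never arrived
	fmt.Println(sawPrestop(got + "GET /echo?msg=prestop HTTP/1.1\n")) // true
}
```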
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":351,"failed":5,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

SS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":133,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:40:02.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
Jul  9 10:40:10.220: INFO: Got stdout from 54.183.115.87:22: Hello from ubuntu@ip-172-20-55-238
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jul  9 10:40:12.285: INFO: Got stdout from 52.53.244.161:22: stdout
Jul  9 10:40:12.285: INFO: Got stderr from 52.53.244.161:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ubuntu@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:17.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-7536" for this suite.


... skipping 16 lines ...
Jul  9 10:35:14.675: INFO: Creating resource for dynamic PV
Jul  9 10:35:14.675: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass fsgroupchangepolicy-692522vzh
STEP: creating a claim
Jul  9 10:35:14.725: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating Pod in namespace fsgroupchangepolicy-6925 with fsgroup 1000
Jul  9 10:40:15.034: FAIL: Unexpected error:
    <*errors.errorString | 0xc003c3d170>: {
        s: "pod \"pod-1d147aaa-c423-43b4-92b6-1843c38418b8\" is not Running: timed out waiting for the condition",
    }
    pod "pod-1d147aaa-c423-43b4-92b6-1843c38418b8" is not Running: timed out waiting for the condition
occurred

... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "fsgroupchangepolicy-6925".
STEP: Found 5 events.
Jul  9 10:40:15.234: INFO: At 2021-07-09 10:35:14 +0000 UTC - event for aws848h2: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  9 10:40:15.234: INFO: At 2021-07-09 10:35:14 +0000 UTC - event for aws848h2: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  9 10:40:15.234: INFO: At 2021-07-09 10:35:14 +0000 UTC - event for aws848h2: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-kggbl_3f1afca3-5d25-4429-9cbc-963947b5dbaa } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-6925/aws848h2"
Jul  9 10:40:15.234: INFO: At 2021-07-09 10:35:24 +0000 UTC - event for aws848h2: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-kggbl_3f1afca3-5d25-4429-9cbc-963947b5dbaa } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-692522vzh": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  9 10:40:15.234: INFO: At 2021-07-09 10:35:35 +0000 UTC - event for aws848h2: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-kggbl_3f1afca3-5d25-4429-9cbc-963947b5dbaa } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-692522vzh": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  9 10:40:15.283: INFO: POD                                       NODE  PHASE    GRACE  CONDITIONS
Jul  9 10:40:15.283: INFO: pod-1d147aaa-c423-43b4-92b6-1843c38418b8        Pending         []
Jul  9 10:40:15.283: INFO: 
Jul  9 10:40:15.334: INFO: 
Logging node info for node ip-172-20-35-137.us-west-1.compute.internal
Jul  9 10:40:15.383: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-35-137.us-west-1.compute.internal    6fabd899-928d-426a-90dc-736e6c74bfd4 42617 0 2021-07-09 10:06:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-1 failure-domain.beta.kubernetes.io/zone:us-west-1a kops.k8s.io/instancegroup:master-us-west-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-35-137.us-west-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-west-1a topology.kubernetes.io/region:us-west-1 topology.kubernetes.io/zone:us-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592cf4d4a52bd7d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{protokube Update v1 2021-07-09 10:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2021-07-09 10:07:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {kubelet Update v1 2021-07-09 10:07:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} } {aws-cloud-controller-manager Update v1 2021-07-09 10:07:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2021-07-09 10:07:41 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-09 10:07:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}} } {kubelet Update v1 2021-07-09 10:07:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-1a/i-09592cf4d4a52bd7d,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3904507904 0} {<nil>} 3812996Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 
0} {<nil>} 0 DecimalSI},memory: {{3799650304 0} {<nil>} 3710596Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-09 10:38:03 +0000 UTC,LastTransitionTime:2021-07-09 10:06:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-09 10:38:03 +0000 UTC,LastTransitionTime:2021-07-09 10:06:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-09 10:38:03 +0000 UTC,LastTransitionTime:2021-07-09 10:06:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-09 10:38:03 +0000 UTC,LastTransitionTime:2021-07-09 10:07:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.35.137,},NodeAddress{Type:ExternalIP,Address:52.53.244.161,},NodeAddress{Type:InternalDNS,Address:ip-172-20-35-137.us-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-35-137.us-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-53-244-161.us-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2aed4241485ec9e03a823404b0123e,SystemUUID:ec2aed42-4148-5ec9-e03a-823404b0123e,BootID:ab603ea7-3ec5-445d-8bcf-c0577e582b2b,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.1,KubeProxyVersion:v1.22.0-beta.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.22.0-beta.1],SizeBytes:129673683,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.22.0-beta.1],SizeBytes:123201847,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1],SizeBytes:113890828,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1],SizeBytes:112365069,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.1],SizeBytes:105483977,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 
k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.22.0-beta.1],SizeBytes:53943096,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1],SizeBytes:25632279,},ContainerImage{Names:[gcr.io/k8s-staging-provider-aws/cloud-controller-manager@sha256:6e0084ecedc8d6d2b0f5cb984c4fe6c860c8d7283c173145b0eaeaaff35ba98a gcr.io/k8s-staging-provider-aws/cloud-controller-manager:latest],SizeBytes:16211866,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 251 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Jul  9 10:40:15.034: Unexpected error:
          <*errors.errorString | 0xc003c3d170>: {
              s: "pod \"pod-1d147aaa-c423-43b4-92b6-1843c38418b8\" is not Running: timed out waiting for the condition",
          }
          pod "pod-1d147aaa-c423-43b4-92b6-1843c38418b8" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:250
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":24,"skipped":207,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:17.584: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:17.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4772" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":25,"skipped":209,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:18.015: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:18.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7821" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":26,"skipped":230,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:19.070: INFO: Only supported for providers [vsphere] (not aws)
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:19.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2556" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":34,"skipped":353,"failed":5,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:19.843: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:23.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-1748" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":35,"skipped":375,"failed":5,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:23.747: INFO: Only supported for providers [vsphere] (not aws)
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":28,"skipped":170,"failed":2,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 58 lines ...
Jul  9 10:40:12.106: INFO: Pod aws-client still exists
Jul  9 10:40:14.054: INFO: Waiting for pod aws-client to disappear
Jul  9 10:40:14.105: INFO: Pod aws-client still exists
Jul  9 10:40:16.054: INFO: Waiting for pod aws-client to disappear
Jul  9 10:40:16.105: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jul  9 10:40:16.478: INFO: Couldn't delete PD "aws://us-west-1a/vol-02125e0e1eaa62311", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02125e0e1eaa62311 is currently attached to i-04338b50ceb21cc5f
	status code: 400, request id: 76fe1bc1-2d6e-486a-a711-869ea9fe898c
Jul  9 10:40:21.793: INFO: Couldn't delete PD "aws://us-west-1a/vol-02125e0e1eaa62311", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02125e0e1eaa62311 is currently attached to i-04338b50ceb21cc5f
	status code: 400, request id: cc135f33-9450-455d-ab4d-4f283c2eb4ab
Jul  9 10:40:27.161: INFO: Successfully deleted PD "aws://us-west-1a/vol-02125e0e1eaa62311".
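The repeated `Couldn't delete PD ... VolumeInUse ... sleeping 5s` lines above show a simple sleep-and-retry loop around EBS volume deletion: the delete fails with `VolumeInUse` while the volume is still attached to the instance, and succeeds once it detaches. A minimal sketch of that pattern follows — the `delete_volume_with_retry` helper and the fake delete function are hypothetical illustrations, not the e2e framework's actual code:

```python
import time


def delete_volume_with_retry(delete_fn, attempts=5, delay=5.0):
    """Call delete_fn until it succeeds, sleeping between attempts.

    Mirrors the log's behavior: a VolumeInUse-style error is treated as
    transient and retried; any leftover error is raised after the last try.
    """
    last_err = None
    for _ in range(attempts):
        try:
            delete_fn()
            return True
        except RuntimeError as err:  # stands in for the VolumeInUse API error
            last_err = err
            time.sleep(delay)
    raise last_err


# Fake delete that fails twice with VolumeInUse, then succeeds —
# matching the two failed attempts seen in the log before success.
calls = {"n": 0}


def fake_delete():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("VolumeInUse: volume is currently attached")


ok = delete_volume_with_retry(fake_delete, attempts=5, delay=0)
print(ok, calls["n"])  # prints: True 3
```

In the real run the retry delay is 5 seconds, which is why the two failed attempts in the log are about five seconds apart.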
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:27.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8295" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":34,"skipped":390,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:27.299: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 59 lines ...
Jul  9 10:37:49.554: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2855 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.69.29.93:80 2>&1 || true; echo; done'
Jul  9 10:40:20.321: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.69.29.93:80\n+ true\n+ echo\n"
Jul  9 10:40:20.321: INFO: stdout: "wget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\n"
Jul  9 10:40:20.321: INFO: Unable to reach the following endpoints of service 100.69.29.93: map[service-headless-toggled-kh6fq:{} service-headless-toggled-nz6hn:{} service-headless-toggled-zlr9w:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2855
STEP: Deleting pod verify-service-up-exec-pod-452gx in namespace services-2855
Jul  9 10:40:25.441: FAIL: Unexpected error:
    <*errors.errorString | 0xc0026d00a0>: {
        s: "service verification failed for: 100.69.29.93\nexpected [service-headless-toggled-kh6fq service-headless-toggled-nz6hn service-headless-toggled-zlr9w]\nreceived [wget: download timed out]",
    }
    service verification failed for: 100.69.29.93
    expected [service-headless-toggled-kh6fq service-headless-toggled-nz6hn service-headless-toggled-zlr9w]
    received [wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.29()
... skipping 353 lines ...
• Failure [336.660 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1937

  Jul  9 10:40:25.442: Unexpected error:
      <*errors.errorString | 0xc0026d00a0>: {
          s: "service verification failed for: 100.69.29.93\nexpected [service-headless-toggled-kh6fq service-headless-toggled-nz6hn service-headless-toggled-zlr9w]\nreceived [wget: download timed out]",
      }
      service verification failed for: 100.69.29.93
      expected [service-headless-toggled-kh6fq service-headless-toggled-nz6hn service-headless-toggled-zlr9w]
      received [wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1962
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":36,"skipped":218,"failed":2,"failures":["[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should implement service.kubernetes.io/headless"]}
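The failure above boils down to a coverage check: the test polls the service VIP with `wget` many times and passes only if every backend pod name shows up in the responses; here it received only `wget: download timed out`. A simplified sketch of that pass condition (this helper is an illustration, not the framework's actual verification code):

```python
def verify_endpoints(received, expected):
    """Return the expected backend names never seen in the responses.

    An empty result means every endpoint answered at least once,
    which is the condition the 'verify-service-up' check requires.
    """
    return sorted(set(expected) - set(received))


missing = verify_endpoints(
    ["wget: download timed out"],
    [
        "service-headless-toggled-kh6fq",
        "service-headless-toggled-nz6hn",
        "service-headless-toggled-zlr9w",
    ],
)
print(len(missing))  # prints: 3 — no expected endpoint ever responded
```

Because the missing set is non-empty, the framework raises the `service verification failed for: 100.69.29.93` error quoted in the stack trace above.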

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:28.333: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
&Pod{ObjectMeta:{webserver-deployment-795d758f88-7p7jf webserver-deployment-795d758f88- deployment-9431  faaf5630-dbc3-4e82-ae96-352fa4c5388c 45744 0 2021-07-09 10:40:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb0347 0xc001bb0348}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gxlzc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxlzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-54-0.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.54.0,PodIP:,StartTime:2021-07-09 10:40:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.425: INFO: Pod "webserver-deployment-795d758f88-bwj6s" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-bwj6s webserver-deployment-795d758f88- deployment-9431  7b9aa463-328f-43c1-bc0c-dc408ccf8761 45778 0 2021-07-09 10:40:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb0910 0xc001bb0911}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wrjfr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrjfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-42-78.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.42.78,PodIP:,StartTime:2021-07-09 10:40:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.425: INFO: Pod "webserver-deployment-795d758f88-dt25w" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-dt25w webserver-deployment-795d758f88- deployment-9431  e3cdc826-f196-42ca-8793-872fcc79b4c7 45640 0 2021-07-09 10:40:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb0b97 0xc001bb0b98}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qcbxg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qcbxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-55-238.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.425: INFO: Pod "webserver-deployment-795d758f88-fh4wd" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-fh4wd webserver-deployment-795d758f88- deployment-9431  eca27df2-bf08-4cba-b1bc-9cdc9f31e9ae 45715 0 2021-07-09 10:40:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb0d30 0xc001bb0d31}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dds9p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dds9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-54-0.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.54.0,PodIP:100.96.4.55,StartTime:2021-07-09 10:40:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.425: INFO: Pod "webserver-deployment-795d758f88-fxx49" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-fxx49 webserver-deployment-795d758f88- deployment-9431  664e227c-544c-4238-a087-721e34538d2d 45712 0 2021-07-09 10:40:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb0f30 0xc001bb0f31}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.253\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2jb9s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jb9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-42-78.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.42.78,PodIP:100.96.1.253,StartTime:2021-07-09 10:40:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.425: INFO: Pod "webserver-deployment-795d758f88-mccf2" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-mccf2 webserver-deployment-795d758f88- deployment-9431  062326fa-cf93-467c-8248-f0610dfed2b4 45734 0 2021-07-09 10:40:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb1137 0xc001bb1138}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vlpv4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vlpv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-48-135.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.426: INFO: Pod "webserver-deployment-795d758f88-mvlgn" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-mvlgn webserver-deployment-795d758f88- deployment-9431  8fe01339-116d-4a7a-82e2-d6c8d02ddcb8 45777 0 2021-07-09 10:40:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb12a0 0xc001bb12a1}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d2fjm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2fjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-55-238.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.426: INFO: Pod "webserver-deployment-795d758f88-qc854" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-qc854 webserver-deployment-795d758f88- deployment-9431  2d836d6f-63ec-4d8c-8e4c-0f83425a2055 45754 0 2021-07-09 10:40:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb1400 0xc001bb1401}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9fccv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9fccv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-55-238.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.426: INFO: Pod "webserver-deployment-795d758f88-skfkz" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-skfkz webserver-deployment-795d758f88- deployment-9431  095378ee-1f5b-48bc-a97d-43d1deb5fd9f 45768 0 2021-07-09 10:40:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb1560 0xc001bb1561}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fxzvb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxzvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-54-0.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.54.0,PodIP:,StartTime:2021-07-09 10:40:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.426: INFO: Pod "webserver-deployment-795d758f88-wtphh" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wtphh webserver-deployment-795d758f88- deployment-9431  5139fa76-c486-4c5e-a698-594ea06482ea 45810 0 2021-07-09 10:40:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ee9bfd8a-9e3a-4608-ad55-5d46f7625336 0xc001bb1730 0xc001bb1731}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ee9bfd8a-9e3a-4608-ad55-5d46f7625336\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.2.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fbmp2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-48-135.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.48.135,PodIP:100.96.2.66,StartTime:2021-07-09 10:40:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.427: INFO: Pod "webserver-deployment-847dcfb7fb-28s9l" is available:
&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-28s9l webserver-deployment-847dcfb7fb- deployment-9431  b8506558-9e1b-4b96-acda-541be367fd66 45336 0 2021-07-09 10:40:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 84ba1281-2e67-4264-ba44-a3ca8a7829e6 0xc001bb1930 0xc001bb1931}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"84ba1281-2e67-4264-ba44-a3ca8a7829e6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.53\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kzzjc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzzjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-54-0.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.54.0,PodIP:100.96.4.53,StartTime:2021-07-09 10:40:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-07-09 10:40:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://d18651f70856233e46f40b041591e44fe9654c610cf1f2c7cdd5dcae0e94db51,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.427: INFO: Pod "webserver-deployment-847dcfb7fb-5gcb5" is available:
&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5gcb5 webserver-deployment-847dcfb7fb- deployment-9431  14f5344e-4b95-4e73-b833-1121cb738443 45327 0 2021-07-09 10:40:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 84ba1281-2e67-4264-ba44-a3ca8a7829e6 0xc001bb1b10 0xc001bb1b11}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"84ba1281-2e67-4264-ba44-a3ca8a7829e6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tw2wz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tw2wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-42-78.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.42.78,PodIP:100.96.1.252,StartTime:2021-07-09 10:40:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-07-09 10:40:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://3abc5e312d03c2f3c5e69f29a4cad20690b4083f5b0fd1cc287f05ffd96cdecb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  9 10:40:28.427: INFO: Pod "webserver-deployment-847dcfb7fb-5gd2b" is available:
&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5gd2b webserver-deployment-847dcfb7fb- deployment-9431  a6da4598-0ce8-482e-8d8c-5ed42a3db7d5 45560 0 2021-07-09 10:40:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 84ba1281-2e67-4264-ba44-a3ca8a7829e6 0xc001bb1ce7 0xc001bb1ce8}] []  [{kube-controller-manager Update v1 2021-07-09 10:40:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"84ba1281-2e67-4264-ba44-a3ca8a7829e6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-07-09 10:40:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.3.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w9zfl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9zfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-55-238.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-07-09 10:40:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.55.238,PodIP:100.96.3.82,StartTime:2021-07-09 10:40:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-07-09 10:40:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://157a2f96e2782303f0f5c8131c0d47d96925b8173ac9ea8a1e91e317c1514228,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.3.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 40 lines ...
• [SLOW TEST:14.016 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:28.568: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 91 lines ...
• [SLOW TEST:9.071 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":36,"skipped":387,"failed":5,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:33.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7442" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":37,"skipped":396,"failed":5,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:34.087: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 114 lines ...
• [SLOW TEST:11.044 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:40:39.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:40.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9785" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":6,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:23.687 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:52
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":7,"skipped":28,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:40.688: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 51 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":37,"skipped":223,"failed":2,"failures":["[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:40:30.011: INFO: >>> kubeConfig: /root/.kube/config
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":38,"skipped":223,"failed":2,"failures":["[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should implement service.kubernetes.io/headless"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:47.381: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-2b971afb-9591-4980-ac30-3959c01d35eb
STEP: Creating a pod to test consume secrets
Jul  9 10:40:41.114: INFO: Waiting up to 5m0s for pod "pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca" in namespace "secrets-9237" to be "Succeeded or Failed"
Jul  9 10:40:41.165: INFO: Pod "pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca": Phase="Pending", Reason="", readiness=false. Elapsed: 50.809573ms
Jul  9 10:40:43.216: INFO: Pod "pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102544464s
Jul  9 10:40:45.273: INFO: Pod "pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158667519s
Jul  9 10:40:47.324: INFO: Pod "pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.2096906s
STEP: Saw pod success
Jul  9 10:40:47.324: INFO: Pod "pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca" satisfied condition "Succeeded or Failed"
Jul  9 10:40:47.374: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca container secret-volume-test: <nil>
STEP: delete the pod
Jul  9 10:40:47.482: INFO: Waiting for pod pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca to disappear
Jul  9 10:40:47.533: INFO: Pod pod-secrets-4f3eb6a8-6e49-4e26-b17c-592fdea7aaca no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.883 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":43,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:47.647: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 179 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":43,"skipped":331,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:48.016: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:48.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-97" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":44,"skipped":335,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  9 10:40:48.718: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 328 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:40:49.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-384" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":39,"skipped":228,"failed":2,"failures":["[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should implement service.kubernetes.io/headless"]}
Jul  9 10:40:49.430: INFO: Running AfterSuite actions on all nodes
Jul  9 10:40:49.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:40:49.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:40:49.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:40:49.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:40:49.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 16 lines ...
Jul  9 10:39:41.799: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1454c5497
STEP: creating a claim
Jul  9 10:39:41.851: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-xz84
STEP: Creating a pod to test subpath
Jul  9 10:39:42.022: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xz84" in namespace "provisioning-1454" to be "Succeeded or Failed"
Jul  9 10:39:42.073: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 50.680091ms
Jul  9 10:39:44.124: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101390178s
Jul  9 10:39:46.175: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152513591s
Jul  9 10:39:48.227: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204275455s
Jul  9 10:39:50.279: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256407383s
Jul  9 10:39:52.330: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 10.307983373s
... skipping 18 lines ...
Jul  9 10:40:31.321: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 49.298733779s
Jul  9 10:40:33.372: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 51.350042192s
Jul  9 10:40:35.424: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 53.40186622s
Jul  9 10:40:37.476: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Pending", Reason="", readiness=false. Elapsed: 55.453734598s
Jul  9 10:40:39.528: INFO: Pod "pod-subpath-test-dynamicpv-xz84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 57.505710987s
STEP: Saw pod success
Jul  9 10:40:39.528: INFO: Pod "pod-subpath-test-dynamicpv-xz84" satisfied condition "Succeeded or Failed"
Jul  9 10:40:39.579: INFO: Trying to get logs from node ip-172-20-48-135.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-xz84 container test-container-subpath-dynamicpv-xz84: <nil>
STEP: delete the pod
Jul  9 10:40:39.685: INFO: Waiting for pod pod-subpath-test-dynamicpv-xz84 to disappear
Jul  9 10:40:39.736: INFO: Pod pod-subpath-test-dynamicpv-xz84 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xz84
Jul  9 10:40:39.736: INFO: Deleting pod "pod-subpath-test-dynamicpv-xz84" in namespace "provisioning-1454"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":83,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
Jul  9 10:40:55.363: INFO: Running AfterSuite actions on all nodes
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  9 10:40:55.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":18,"skipped":133,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:40:17.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":19,"skipped":133,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Jul  9 10:40:55.990: INFO: Running AfterSuite actions on all nodes
Jul  9 10:40:55.990: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:40:55.990: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:40:55.990: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:40:55.990: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:40:55.990: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 22 lines ...
• [SLOW TEST:42.802 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":27,"skipped":233,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}
Jul  9 10:41:01.894: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:01.894: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:01.894: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:01.894: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:01.894: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:01.894: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 214 lines ...
Jul  9 10:40:07.966: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z7kfp] to have phase Bound
Jul  9 10:40:08.018: INFO: PersistentVolumeClaim pvc-z7kfp found and phase=Bound (51.163777ms)
STEP: Deleting the previously created pod
Jul  9 10:40:36.283: INFO: Deleting pod "pvc-volume-tester-vmskl" in namespace "csi-mock-volumes-8838"
Jul  9 10:40:36.338: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vmskl" to be fully deleted
STEP: Checking CSI driver logs
Jul  9 10:40:40.495: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0cc60b21-1d09-4174-8de2-17979e41f0d3/volumes/kubernetes.io~csi/pvc-c710312a-ff97-4fde-b4b7-34d551371564/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-vmskl
Jul  9 10:40:40.495: INFO: Deleting pod "pvc-volume-tester-vmskl" in namespace "csi-mock-volumes-8838"
STEP: Deleting claim pvc-z7kfp
Jul  9 10:40:40.649: INFO: Waiting up to 2m0s for PersistentVolume pvc-c710312a-ff97-4fde-b4b7-34d551371564 to get deleted
Jul  9 10:40:40.701: INFO: PersistentVolume pvc-c710312a-ff97-4fde-b4b7-34d551371564 found and phase=Released (52.275493ms)
Jul  9 10:40:42.753: INFO: PersistentVolume pvc-c710312a-ff97-4fde-b4b7-34d551371564 found and phase=Released (2.104135669s)
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":30,"skipped":237,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
Jul  9 10:41:08.929: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:08.929: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:08.929: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:08.929: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:08.929: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:08.929: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 26 lines ...
• [SLOW TEST:72.338 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:319
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":54,"skipped":349,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
Jul  9 10:41:25.270: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:25.270: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:25.270: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:25.270: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:25.270: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:25.270: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 23 lines ...
• [SLOW TEST:52.725 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete pods when suspended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:111
------------------------------
{"msg":"PASSED [sig-apps] Job should delete pods when suspended","total":-1,"completed":38,"skipped":409,"failed":5,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
Jul  9 10:41:26.872: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:26.872: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:26.872: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:26.872: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:26.872: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:26.872: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 18 lines ...
I0709 10:38:26.295229   12485 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9915, replica count: 2
I0709 10:38:29.396192   12485 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul  9 10:38:29.566: INFO: Creating new exec pod
Jul  9 10:38:31.723: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9915 exec execpodd6j4k -- /bin/sh -x -c nslookup nodeport-service.services-9915.svc.cluster.local'
Jul  9 10:38:47.359: INFO: rc: 1
Jul  9 10:38:47.359: INFO: ExternalName service "services-9915/execpodd6j4k" failed to resolve to IP
... skipping 24 lines (repeated nslookup retries, rc: 1, every ~16s from 10:38:49 to 10:40:56) ...
Jul  9 10:40:56.983: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9915 exec execpodd6j4k -- /bin/sh -x -c nslookup nodeport-service.services-9915.svc.cluster.local'
Jul  9 10:41:12.605: INFO: rc: 1
Jul  9 10:41:12.605: INFO: ExternalName service "services-9915/execpodd6j4k" failed to resolve to IP
Jul  9 10:41:12.606: FAIL: Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 239 lines ...
• Failure [182.093 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:41:12.606: Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1458
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":24,"skipped":247,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}
Jul  9 10:41:27.967: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:27.967: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:27.967: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:27.967: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:27.967: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:27.967: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 25 lines ...
• [SLOW TEST:40.820 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:185
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":9,"skipped":56,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
Jul  9 10:41:28.562: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:28.562: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:28.562: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:28.562: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:28.562: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:28.562: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 27 lines ...
STEP: Listing all of the created validation webhooks
Jul  9 10:40:50.294: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:41:00.523: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:41:10.710: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:41:20.915: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:41:31.034: INFO: Waiting for webhook configuration to be ready...
Jul  9 10:41:31.034: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "webhook-9681".
STEP: Found 7 events.
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:26 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-78988fc6cd to 1
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:26 +0000 UTC - event for sample-webhook-deployment-78988fc6cd: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-78988fc6cd-v5vql
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:26 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-v5vql: {default-scheduler } Scheduled: Successfully assigned webhook-9681/sample-webhook-deployment-78988fc6cd-v5vql to ip-172-20-48-135.us-west-1.compute.internal
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:27 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-v5vql: {kubelet ip-172-20-48-135.us-west-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:28 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-v5vql: {kubelet ip-172-20-48-135.us-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:28 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-v5vql: {kubelet ip-172-20-48-135.us-west-1.compute.internal} Created: Created container sample-webhook
Jul  9 10:41:31.088: INFO: At 2021-07-09 10:40:28 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-v5vql: {kubelet ip-172-20-48-135.us-west-1.compute.internal} Started: Started container sample-webhook
Jul  9 10:41:31.140: INFO: POD                                         NODE                                         PHASE    GRACE  CONDITIONS
Jul  9 10:41:31.140: INFO: sample-webhook-deployment-78988fc6cd-v5vql  ip-172-20-48-135.us-west-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:40:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:40:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:40:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:40:26 +0000 UTC  }]
Jul  9 10:41:31.140: INFO: 
... skipping 392 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:41:31.034: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":28,"skipped":171,"failed":3,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
Jul  9 10:41:35.997: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:35.997: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:35.997: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:35.997: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:35.997: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:35.997: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":7,"skipped":90,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}
Jul  9 10:41:40.933: INFO: Running AfterSuite actions on all nodes
Jul  9 10:41:40.933: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:41:40.933: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:41:40.933: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:41:40.933: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:41:40.933: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 14 lines ...
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-643cf54f-bc48-49cd-a35d-e062d6680558]
STEP: Verifying pods for RC slow-terminating-unready-pod
Jul  9 10:25:45.702: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Jul  9 10:26:17.961: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-n5dfv]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-n5dfv)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.55.238", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004092438), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00465a3e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc00435e56d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
... skipping 21 lines (same "Failed to GET from replica 1" error and identical Pending pod status dump repeated every ~32s from 10:26:50 to 10:30:02) ...
... skipping 42 lines ...
... skipping 24 lines ...
Jul  9 10:42:16.272: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-n5dfv]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-n5dfv)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761423145, loc:(*time.Location)(0xa085940)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.55.238", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004092438), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00465a3e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc00435e56d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  9 10:42:16.272: FAIL: Unexpected error:
    <*errors.errorString | 0xc003450260>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.21()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1709 +0xb99
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000326900)
... skipping 12 lines ...
STEP: Found 7 events.
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:25:45 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulCreate: Created pod: slow-terminating-unready-pod-n5dfv
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:25:45 +0000 UTC - event for slow-terminating-unready-pod-n5dfv: {default-scheduler } Scheduled: Successfully assigned services-9557/slow-terminating-unready-pod-n5dfv to ip-172-20-55-238.us-west-1.compute.internal
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:25:46 +0000 UTC - event for slow-terminating-unready-pod-n5dfv: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:25:46 +0000 UTC - event for slow-terminating-unready-pod-n5dfv: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Created: Created container slow-terminating-unready-pod
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:25:46 +0000 UTC - event for slow-terminating-unready-pod-n5dfv: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Started: Started container slow-terminating-unready-pod
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:25:46 +0000 UTC - event for slow-terminating-unready-pod-n5dfv: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Unhealthy: Readiness probe failed: 
Jul  9 10:42:16.576: INFO: At 2021-07-09 10:42:16 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulDelete: Deleted pod: slow-terminating-unready-pod-n5dfv
Jul  9 10:42:16.627: INFO: POD                                 NODE                                         PHASE    GRACE  CONDITIONS
Jul  9 10:42:16.628: INFO: slow-terminating-unready-pod-n5dfv  ip-172-20-55-238.us-west-1.compute.internal  Running  600s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:25:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:25:45 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:25:45 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:25:45 +0000 UTC  }]
Jul  9 10:42:16.628: INFO: 
Jul  9 10:42:16.680: INFO: 
Logging node info for node ip-172-20-35-137.us-west-1.compute.internal
... skipping 189 lines ...
• Failure [993.566 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1645

  Jul  9 10:42:16.272: Unexpected error:
      <*errors.errorString | 0xc003450260>: {
          s: "failed to wait for pods responding: timed out waiting for the condition",
      }
      failed to wait for pods responding: timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1709
------------------------------
{"msg":"FAILED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":30,"skipped":266,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Services should create endpoints for unready pods"]}
Jul  9 10:42:18.860: INFO: Running AfterSuite actions on all nodes
Jul  9 10:42:18.860: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:42:18.860: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:42:18.860: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:42:18.860: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:42:18.860: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 6 lines ...
STEP: Creating a kubernetes client
Jul  9 10:37:23.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Jul  9 10:37:24.258: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  9 10:42:24.518: INFO: The test missed event about failed provisioning, but checked that no volume was provisioned for 5m0s
Jul  9 10:42:24.518: INFO: deleting claim "volume-provisioning-8635"/"pvc-jvsp7"
Jul  9 10:42:24.570: INFO: deleting storage class volume-provisioning-8635-invalid-aws9rdbg
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:42:24.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-8635" for this suite.


• [SLOW TEST:300.824 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:737
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":34,"skipped":289,"failed":2,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
Jul  9 10:42:24.733: INFO: Running AfterSuite actions on all nodes
Jul  9 10:42:24.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:42:24.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:42:24.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:42:24.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:42:24.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 26 lines ...
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9610 to expose endpoints map[pod1:[100] pod2:[101]]
Jul  9 10:40:18.072: INFO: successfully validated that service multi-endpoint-test in namespace services-9610 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Jul  9 10:40:18.072: INFO: Creating new exec pod
Jul  9 10:40:27.263: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:40:33.081: INFO: rc: 1
Jul  9 10:40:33.081: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ + nc -v -techo -w hostName 2
 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 56 lines ...
Jul  9 10:40:58.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:03.700: INFO: rc: 1
Jul  9 10:41:03.700: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ + echo hostName
nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:04.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:09.691: INFO: rc: 1
Jul  9 10:41:09.691: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:10.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:15.730: INFO: rc: 1
Jul  9 10:41:15.730: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:16.081: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:21.688: INFO: rc: 1
Jul  9 10:41:21.688: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:22.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:27.693: INFO: rc: 1
Jul  9 10:41:27.693: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:28.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:33.715: INFO: rc: 1
Jul  9 10:41:33.715: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:34.081: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:39.699: INFO: rc: 1
Jul  9 10:41:39.699: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:40.081: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:45.708: INFO: rc: 1
Jul  9 10:41:45.708: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:46.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:51.706: INFO: rc: 1
Jul  9 10:41:51.707: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ + echo hostNamenc
 -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:52.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:41:57.711: INFO: rc: 1
Jul  9 10:41:57.711: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ + echo hostName
nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:41:58.081: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:03.688: INFO: rc: 1
Jul  9 10:42:03.688: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:04.081: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:09.690: INFO: rc: 1
Jul  9 10:42:09.690: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:10.081: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:15.751: INFO: rc: 1
Jul  9 10:42:15.752: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:16.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:21.711: INFO: rc: 1
Jul  9 10:42:21.712: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:22.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:27.691: INFO: rc: 1
Jul  9 10:42:27.691: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:28.082: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:33.698: INFO: rc: 1
Jul  9 10:42:33.698: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ + echo hostNamenc
 -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:33.698: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  9 10:42:39.326: INFO: rc: 1
Jul  9 10:42:39.326: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9610 exec execpodrftnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:39.327: FAIL: Unexpected error:
    <*errors.errorString | 0xc003c4e5e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol
occurred

... skipping 217 lines ...
• Failure [151.204 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:42:39.327: Unexpected error:
      <*errors.errorString | 0xc003c4e5e0>: {
          s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:913
------------------------------
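Every retry above fails the same way: `nc: getaddrinfo: Try again` means the resolver inside the exec pod returned a transient "try again" failure while looking up the service name, i.e. cluster DNS never produced an answer for `multi-endpoint-test`. As a rough local illustration (not part of the test run), a lookup of the bare service name through the system resolver fails similarly wherever no DNS record exists:

```shell
# getent walks the same NSS/getaddrinfo path that nc uses; outside the
# cluster (or with cluster DNS broken) the bare service name has no
# record, so the lookup fails with a non-zero exit status.
getent hosts multi-endpoint-test || echo "lookup failed"
```

Inside the cluster, the equivalent check would be running a DNS lookup from the exec pod (e.g. via `kubectl exec` into `execpodrftnl` in `services-9610`), which exercises CoreDNS rather than the node's resolver.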
{"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":42,"skipped":286,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
Jul  9 10:42:41.880: INFO: Running AfterSuite actions on all nodes
Jul  9 10:42:41.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:42:41.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:42:41.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:42:41.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:42:41.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 17 lines ...
I0709 10:40:27.846416   12410 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1933, replica count: 2
I0709 10:40:30.947796   12410 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0709 10:40:33.948080   12410 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  9 10:40:33.948: INFO: Creating new exec pod
Jul  9 10:40:39.104: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  9 10:40:44.726: INFO: rc: 1
Jul  9 10:40:44.726: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ + echonc hostName -v
 -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 252 lines ...
Jul  9 10:42:33.726: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  9 10:42:39.375: INFO: rc: 1
Jul  9 10:42:39.375: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:39.727: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  9 10:42:45.379: INFO: rc: 1
Jul  9 10:42:45.379: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:45.379: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  9 10:42:50.996: INFO: rc: 1
Jul  9 10:42:50.996: INFO: Service reachability failing with error: error running /tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1933 exec execpodgxb45 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  9 10:42:50.996: FAIL: Unexpected error:
    <*errors.errorString | 0xc0050703d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 212 lines ...
• Failure [146.022 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  9 10:42:50.996: Unexpected error:
      <*errors.errorString | 0xc0050703d0>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1333
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":34,"skipped":411,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}
Jul  9 10:42:53.403: INFO: Running AfterSuite actions on all nodes
Jul  9 10:42:53.403: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:42:53.403: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:42:53.403: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:42:53.403: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:42:53.403: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":48,"skipped":286,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
Jul  9 10:42:56.704: INFO: Running AfterSuite actions on all nodes
Jul  9 10:42:56.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:42:56.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:42:56.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:42:56.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:42:56.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 165 lines ...
Jul  9 10:32:21.856: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1587 create -f -'
Jul  9 10:32:22.216: INFO: stderr: ""
Jul  9 10:32:22.216: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jul  9 10:32:22.216: INFO: Waiting for all frontend pods to be Running.
Jul  9 10:32:27.317: INFO: Waiting for frontend to serve content.
Jul  9 10:32:57.370: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.4.226:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:33:32.425: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.3.235:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:34:07.479: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.3.235:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:34:42.533: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.3.235:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:35:17.587: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.1.195:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:35:52.640: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.3.235:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:36:27.696: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.4.226:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:37:02.750: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.1.195:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:37:37.804: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.4.226:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:38:12.857: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.1.195:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:38:47.910: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.1.195:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:39:22.964: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.3.235:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:39:58.017: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.4.226:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:40:33.069: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.3.235:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:41:08.122: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.4.226:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:41:43.175: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.1.195:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:42:18.228: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.1.195:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:42:53.280: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s v1 Status{status: Failure, message: "error trying to reach service: dial tcp 100.96.4.226:80: i/o timeout", reason: ServiceUnavailable}
Jul  9 10:42:58.281: FAIL: Frontend service did not start serving content in 600 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:375 +0x159
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000b33e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 61 lines ...
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:32:22 +0000 UTC - event for frontend-685fc574d5-t24jz: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Started: Started container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:32:22 +0000 UTC - event for frontend-685fc574d5-t24jz: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:32:22 +0000 UTC - event for frontend-685fc574d5-wqtsz: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Started: Started container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:32:22 +0000 UTC - event for frontend-685fc574d5-wqtsz: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Created: Created container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:32:22 +0000 UTC - event for frontend-685fc574d5-xpbsn: {kubelet ip-172-20-42-78.us-west-1.compute.internal} Created: Created container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:32:22 +0000 UTC - event for frontend-685fc574d5-xpbsn: {kubelet ip-172-20-42-78.us-west-1.compute.internal} Started: Started container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:33:36 +0000 UTC - event for agnhost-replica-6bcf79b489-5hnbs: {kubelet ip-172-20-48-135.us-west-1.compute.internal} BackOff: Back-off restarting failed container
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:34:07 +0000 UTC - event for agnhost-replica-6bcf79b489-9rtw9: {kubelet ip-172-20-54-0.us-west-1.compute.internal} BackOff: Back-off restarting failed container
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:42:59 +0000 UTC - event for agnhost-primary-5db8ddd565-kh227: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Killing: Stopping container primary
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:42:59 +0000 UTC - event for frontend-685fc574d5-t24jz: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Killing: Stopping container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:42:59 +0000 UTC - event for frontend-685fc574d5-wqtsz: {kubelet ip-172-20-55-238.us-west-1.compute.internal} Killing: Stopping container guestbook-frontend
Jul  9 10:43:00.075: INFO: At 2021-07-09 10:42:59 +0000 UTC - event for frontend-685fc574d5-xpbsn: {kubelet ip-172-20-42-78.us-west-1.compute.internal} Killing: Stopping container guestbook-frontend
Jul  9 10:43:00.127: INFO: POD                               NODE                                         PHASE    GRACE  CONDITIONS
Jul  9 10:43:00.127: INFO: agnhost-primary-5db8ddd565-kh227  ip-172-20-54-0.us-west-1.compute.internal    Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:32:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:43:00 +0000 UTC ContainersNotReady containers with unready status: [primary]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:43:00 +0000 UTC ContainersNotReady containers with unready status: [primary]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-09 10:32:21 +0000 UTC  }]
... skipping 181 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  9 10:42:58.281: Frontend service did not start serving content in 600 seconds.

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:375
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":14,"skipped":63,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
Jul  9 10:43:02.339: INFO: Running AfterSuite actions on all nodes
Jul  9 10:43:02.339: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:43:02.339: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:43:02.339: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:43:02.339: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:43:02.339: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 42 lines ...
Jul  9 10:41:28.583: INFO: Running '/tmp/kubectl330794136/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6622 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.68.245.76:80 2>&1 || true; echo; done'
Jul  9 10:43:14.322: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O 
- http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ 
true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.245.76:80\n+ echo\n"
Jul  9 10:43:14.322: INFO: stdout: "wget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed 
out\n\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: 
download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-9fcsk\n"
Jul  9 10:43:14.322: INFO: Unable to reach the following endpoints of service 100.68.245.76: map[service-proxy-toggled-6h7vz:{} service-proxy-toggled-8ttjs:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6622
STEP: Deleting pod verify-service-up-exec-pod-8f79f in namespace services-6622
Jul  9 10:43:19.434: FAIL: Unexpected error:
    <*errors.errorString | 0xc001928080>: {
        s: "service verification failed for: 100.68.245.76\nexpected [service-proxy-toggled-6h7vz service-proxy-toggled-8ttjs service-proxy-toggled-9fcsk]\nreceived [service-proxy-toggled-9fcsk wget: download timed out]",
    }
    service verification failed for: 100.68.245.76
    expected [service-proxy-toggled-6h7vz service-proxy-toggled-8ttjs service-proxy-toggled-9fcsk]
    received [service-proxy-toggled-9fcsk wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.28()
... skipping 216 lines ...
• Failure [341.840 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1886

  Jul  9 10:43:19.434: Unexpected error:
      <*errors.errorString | 0xc001928080>: {
          s: "service verification failed for: 100.68.245.76\nexpected [service-proxy-toggled-6h7vz service-proxy-toggled-8ttjs service-proxy-toggled-9fcsk]\nreceived [service-proxy-toggled-9fcsk wget: download timed out]",
      }
      service verification failed for: 100.68.245.76
      expected [service-proxy-toggled-6h7vz service-proxy-toggled-8ttjs service-proxy-toggled-9fcsk]
      received [service-proxy-toggled-9fcsk wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1910
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":25,"skipped":251,"failed":5,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
Jul  9 10:43:21.792: INFO: Running AfterSuite actions on all nodes
Jul  9 10:43:21.792: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:43:21.792: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:43:21.792: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:43:21.792: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:43:21.792: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jul  9 10:44:02.149: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul  9 10:44:02.149: INFO: Deleting pod "simpletest.rc-28wgl" in namespace "gc-8187"
Jul  9 10:44:02.206: INFO: Deleting pod "simpletest.rc-2xrc7" in namespace "gc-8187"
Jul  9 10:44:02.261: INFO: Deleting pod "simpletest.rc-8xhkh" in namespace "gc-8187"
Jul  9 10:44:02.314: INFO: Deleting pod "simpletest.rc-g9xcv" in namespace "gc-8187"
Jul  9 10:44:02.369: INFO: Deleting pod "simpletest.rc-jv5ck" in namespace "gc-8187"
Jul  9 10:44:02.424: INFO: Deleting pod "simpletest.rc-krh69" in namespace "gc-8187"
... skipping 10 lines ...
• [SLOW TEST:341.445 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":12,"skipped":157,"failed":1,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
Jul  9 10:44:02.810: INFO: Running AfterSuite actions on all nodes
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  9 10:44:02.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":32,"skipped":204,"failed":1,"failures":["[sig-network] Services should be able to up and down services"]}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  9 10:40:00.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jul  9 10:45:07.671: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  9 10:45:07.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5" for this suite.


• [SLOW TEST:306.895 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":33,"skipped":204,"failed":1,"failures":["[sig-network] Services should be able to up and down services"]}
Jul  9 10:45:07.781: INFO: Running AfterSuite actions on all nodes
Jul  9 10:45:07.781: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  9 10:45:07.781: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  9 10:45:07.781: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  9 10:45:07.781: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  9 10:45:07.781: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 174 lines ...
Jul  9 10:47:30.173: INFO: Still waiting for pvs of statefulset to disappear:
pvc-2ff79d6e-b6c1-4d8a-aedb-5abab46c89a8: {Phase:Released Message: Reason:}
Jul  9 10:47:40.173: INFO: Still waiting for pvs of statefulset to disappear:
pvc-2ff79d6e-b6c1-4d8a-aedb-5abab46c89a8: {Phase:Released Message: Reason:}
Jul  9 10:47:40.223: INFO: Still waiting for pvs of statefulset to disappear:
pvc-2ff79d6e-b6c1-4d8a-aedb-5abab46c89a8: {Phase:Released Message: Reason:}
Jul  9 10:47:40.223: FAIL: Unexpected error:
    <*errors.errorString | 0xc003e54790>: {
        s: "Timeout waiting for pv provisioner to delete pvs, this might mean the test leaked pvs.",
    }
    Timeout waiting for pv provisioner to delete pvs, this might mean the test leaked pvs.
occurred

... skipping 21 lines ...
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:36:56 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:00 +0000 UTC - event for datadir-ss-0: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-kggbl_3f1afca3-5d25-4429-9cbc-963947b5dbaa } ProvisioningSucceeded: Successfully provisioned volume pvc-2ff79d6e-b6c1-4d8a-aedb-5abab46c89a8
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:00 +0000 UTC - event for ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-3676/ss-0 to ip-172-20-54-0.us-west-1.compute.internal
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:03 +0000 UTC - event for ss-0: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-2ff79d6e-b6c1-4d8a-aedb-5abab46c89a8" 
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:19 +0000 UTC - event for ss-0: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:20 +0000 UTC - event for ss-0: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Started: Started container webserver
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:20 +0000 UTC - event for ss-0: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Unhealthy: Readiness probe failed: 
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:20 +0000 UTC - event for ss-0: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Created: Created container webserver
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:28 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:29 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:30 +0000 UTC - event for ss-0: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Unhealthy: Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jul  9 10:47:40.274: INFO: At 2021-07-09 10:37:30 +0000 UTC - event for ss-0: {kubelet ip-172-20-54-0.us-west-1.compute.internal} Killing: Stopping container webserver
Jul  9 10:47:40.323: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  9 10:47:40.323: INFO: 
Jul  9 10:47:40.373: INFO: 
Logging node info for node ip-172-20-35-137.us-west-1.compute.internal
Jul  9 10:47:40.423: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-35-137.us-west-1.compute.internal    6fabd899-928d-426a-90dc-736e6c74bfd4 47494 0 2021-07-09 10:06:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-1 failure-domain.beta.kubernetes.io/zone:us-west-1a kops.k8s.io/instancegroup:master-us-west-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-35-137.us-west-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-west-1a topology.kubernetes.io/region:us-west-1 topology.kubernetes.io/zone:us-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592cf4d4a52bd7d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{protokube Update v1 2021-07-09 10:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2021-07-09 10:07:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {kubelet Update v1 2021-07-09 10:07:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} } {aws-cloud-controller-manager Update v1 2021-07-09 10:07:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region