PR        olemarkus: Give kOps CLI knowledge about ASG warm pools
Result    FAILURE
Tests     0 failed / 0 succeeded
Started   2021-04-14 15:53
Elapsed   13m1s
Revision  39d7f2c8261735b813d8860d8c394f2fbc38a81a
Refs      11227

No Test Failures!


Error lines from build-log.txt

... skipping 448 lines ...
echo "https://storage.googleapis.com/kops-ci/pulls/pull-kops-e2e-cni-calico/pull-f057260770/1.21.0-alpha.4+f057260770" > /home/prow/go/src/k8s.io/kops/.bazelbuild/upload/latest-ci.txt
gsutil -h "Cache-Control:private, max-age=0, no-transform" cp /home/prow/go/src/k8s.io/kops/.bazelbuild/upload/latest-ci.txt gs://kops-ci/pulls/pull-kops-e2e-cni-calico/pull-f057260770
Copying file:///home/prow/go/src/k8s.io/kops/.bazelbuild/upload/latest-ci.txt [Content-Type=text/plain]...
/ [0 files][    0.0 B/  112.0 B]                                                
/ [1 files][  112.0 B/  112.0 B]                                                
Operation completed over 1 objects/112.0 B.                                      
I0414 15:58:30.923883    3027 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/04/14 15:58:30 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0414 15:58:30.936246    3027 http.go:37] curl https://ip.jsb.workers.dev
I0414 15:58:31.045224    3027 up.go:136] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-eccbde1062-51e6f.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210325 --channel=alpha --networking=calico --container-runtime=containerd --node-size=t3.large --admin-access 34.66.226.244/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I0414 15:58:31.061105   10646 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0414 15:58:31.061197   10646 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0414 15:58:31.118752   10646 create_cluster.go:730] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0414 15:58:31.631431   10646 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 34 lines ...

I0414 15:59:07.647638    3027 up.go:172] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-eccbde1062-51e6f.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0414 15:59:07.662525   10665 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0414 15:59:07.662617   10665 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-eccbde1062-51e6f.test-cncf-aws.k8s.io

W0414 15:59:08.776032   10665 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-eccbde1062-51e6f.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 15:59:18.817052   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 15:59:28.871796   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 15:59:38.908090   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 15:59:48.969145   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 15:59:59.007717   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:00:09.050301   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:00:19.116840   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:00:29.154472   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:00:39.206648   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:00:49.375357   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:00:59.410469   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:01:09.450138   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:01:19.498420   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:01:29.555515   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:01:39.596497   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:01:49.640510   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:01:59.694420   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:02:09.732883   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:02:19.771516   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:02:29.810696   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0414 16:02:39.856223   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 7 lines ...
Machine	i-0439e8b12a74a96f0				machine "i-0439e8b12a74a96f0" has not yet joined cluster
Machine	i-0bd6f89a388ccfe85				machine "i-0bd6f89a388ccfe85" has not yet joined cluster
Machine	i-0c0bf26f03948da6d				machine "i-0c0bf26f03948da6d" has not yet joined cluster
Pod	kube-system/coredns-66cbffdd77-j7rrs		system-cluster-critical pod "coredns-66cbffdd77-j7rrs" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-x9dqf	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-x9dqf" is pending

Validation Failed
W0414 16:02:51.819406   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 7 lines ...
Machine	i-0439e8b12a74a96f0				machine "i-0439e8b12a74a96f0" has not yet joined cluster
Machine	i-0bd6f89a388ccfe85				machine "i-0bd6f89a388ccfe85" has not yet joined cluster
Machine	i-0c0bf26f03948da6d				machine "i-0c0bf26f03948da6d" has not yet joined cluster
Pod	kube-system/coredns-66cbffdd77-j7rrs		system-cluster-critical pod "coredns-66cbffdd77-j7rrs" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-x9dqf	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-x9dqf" is pending

Validation Failed
W0414 16:03:03.377442   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 7 lines ...
Machine	i-0439e8b12a74a96f0				machine "i-0439e8b12a74a96f0" has not yet joined cluster
Machine	i-0bd6f89a388ccfe85				machine "i-0bd6f89a388ccfe85" has not yet joined cluster
Machine	i-0c0bf26f03948da6d				machine "i-0c0bf26f03948da6d" has not yet joined cluster
Pod	kube-system/coredns-66cbffdd77-j7rrs		system-cluster-critical pod "coredns-66cbffdd77-j7rrs" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-x9dqf	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-x9dqf" is pending

Validation Failed
W0414 16:03:14.858236   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 12 lines ...
Pod	kube-system/calico-node-lztzs						system-node-critical pod "calico-node-lztzs" is pending
Pod	kube-system/calico-node-zw8bg						system-node-critical pod "calico-node-zw8bg" is pending
Pod	kube-system/coredns-66cbffdd77-j7rrs					system-cluster-critical pod "coredns-66cbffdd77-j7rrs" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-x9dqf				system-cluster-critical pod "coredns-autoscaler-6f594f4c58-x9dqf" is pending
Pod	kube-system/kube-proxy-ip-172-20-33-175.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-33-175.us-west-2.compute.internal" is pending

Validation Failed
W0414 16:03:26.233512   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 16 lines ...
Pod	kube-system/calico-node-xzjvv						system-node-critical pod "calico-node-xzjvv" is pending
Pod	kube-system/calico-node-zw8bg						system-node-critical pod "calico-node-zw8bg" is pending
Pod	kube-system/coredns-66cbffdd77-j7rrs					system-cluster-critical pod "coredns-66cbffdd77-j7rrs" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-x9dqf				system-cluster-critical pod "coredns-autoscaler-6f594f4c58-x9dqf" is pending
Pod	kube-system/kube-proxy-ip-172-20-51-155.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-51-155.us-west-2.compute.internal" is pending

Validation Failed
W0414 16:03:37.694783   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 11 lines ...
Node	ip-172-20-51-155.us-west-2.compute.internal	node "ip-172-20-51-155.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/calico-node-lztzs			system-node-critical pod "calico-node-lztzs" is not ready (calico-node)
Pod	kube-system/calico-node-pgstb			system-node-critical pod "calico-node-pgstb" is pending
Pod	kube-system/calico-node-xzjvv			system-node-critical pod "calico-node-xzjvv" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-x9dqf	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-x9dqf" is pending

Validation Failed
W0414 16:03:49.279679   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-lztzs	system-node-critical pod "calico-node-lztzs" is not ready (calico-node)
Pod	kube-system/calico-node-pgstb	system-node-critical pod "calico-node-pgstb" is not ready (calico-node)
Pod	kube-system/calico-node-xzjvv	system-node-critical pod "calico-node-xzjvv" is not ready (calico-node)

Validation Failed
W0414 16:04:00.651720   10665 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.large	4	4	us-west-2a

... skipping 803 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 155 lines ...
Apr 14 16:06:29.488: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.362 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 14 16:06:29.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3049" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:29.589: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 14 16:06:31.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4137" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":1,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Apr 14 16:06:29.513: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-be146782-78cf-44f9-afbd-fca2c94711f5
STEP: Creating a pod to test consume secrets
Apr 14 16:06:29.774: INFO: Waiting up to 5m0s for pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0" in namespace "secrets-5552" to be "Succeeded or Failed"
Apr 14 16:06:29.836: INFO: Pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 62.150438ms
Apr 14 16:06:31.905: INFO: Pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130903082s
Apr 14 16:06:33.968: INFO: Pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194478988s
Apr 14 16:06:36.032: INFO: Pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257784714s
Apr 14 16:06:38.095: INFO: Pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.320915888s
STEP: Saw pod success
Apr 14 16:06:38.095: INFO: Pod "pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0" satisfied condition "Succeeded or Failed"
Apr 14 16:06:38.157: INFO: Trying to get logs from node ip-172-20-54-22.us-west-2.compute.internal pod pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0 container secret-volume-test: <nil>
STEP: delete the pod
Apr 14 16:06:38.306: INFO: Waiting for pod pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0 to disappear
Apr 14 16:06:38.368: INFO: Pod pod-secrets-a692dc32-2aa1-4d34-ae9e-25ec7ceccdb0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.419 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:38.587: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 178 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:42.232: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 115 lines ...
• [SLOW TEST:14.393 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:42.534: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 48 lines ...
W0414 16:06:30.636521   11401 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 14 16:06:30.636: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Apr 14 16:06:30.866: INFO: Waiting up to 5m0s for pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004" in namespace "security-context-2748" to be "Succeeded or Failed"
Apr 14 16:06:30.939: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004": Phase="Pending", Reason="", readiness=false. Elapsed: 72.767555ms
Apr 14 16:06:33.005: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139033959s
Apr 14 16:06:35.072: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205493367s
Apr 14 16:06:37.135: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268526277s
Apr 14 16:06:39.214: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34785976s
Apr 14 16:06:41.277: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.410532642s
STEP: Saw pod success
Apr 14 16:06:41.277: INFO: Pod "security-context-f4635b65-8beb-41e1-9a19-196f90500004" satisfied condition "Succeeded or Failed"
Apr 14 16:06:41.382: INFO: Trying to get logs from node ip-172-20-51-155.us-west-2.compute.internal pod security-context-f4635b65-8beb-41e1-9a19-196f90500004 container test-container: <nil>
STEP: delete the pod
Apr 14 16:06:42.149: INFO: Waiting for pod security-context-f4635b65-8beb-41e1-9a19-196f90500004 to disappear
Apr 14 16:06:42.257: INFO: Pod security-context-f4635b65-8beb-41e1-9a19-196f90500004 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.381 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:42.662: INFO: Driver local doesn't support ext3 -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:15.370 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:43.466: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 28 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Apr 14 16:06:29.958: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Apr 14 16:06:30.084: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9trn
STEP: Creating a pod to test subpath
Apr 14 16:06:30.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9trn" in namespace "provisioning-1801" to be "Succeeded or Failed"
Apr 14 16:06:30.225: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Pending", Reason="", readiness=false. Elapsed: 62.910978ms
Apr 14 16:06:32.292: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129929463s
Apr 14 16:06:34.356: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193821292s
Apr 14 16:06:36.420: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258153073s
Apr 14 16:06:38.485: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322503461s
Apr 14 16:06:40.549: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.38628285s
Apr 14 16:06:42.664: INFO: Pod "pod-subpath-test-inlinevolume-9trn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.501575447s
STEP: Saw pod success
Apr 14 16:06:42.664: INFO: Pod "pod-subpath-test-inlinevolume-9trn" satisfied condition "Succeeded or Failed"
Apr 14 16:06:42.739: INFO: Trying to get logs from node ip-172-20-51-155.us-west-2.compute.internal pod pod-subpath-test-inlinevolume-9trn container test-container-subpath-inlinevolume-9trn: <nil>
STEP: delete the pod
Apr 14 16:06:42.921: INFO: Waiting for pod pod-subpath-test-inlinevolume-9trn to disappear
Apr 14 16:06:43.049: INFO: Pod pod-subpath-test-inlinevolume-9trn no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9trn
Apr 14 16:06:43.049: INFO: Deleting pod "pod-subpath-test-inlinevolume-9trn" in namespace "provisioning-1801"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:43.479: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 14 16:06:43.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7213" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:43.670: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 204 lines ...
• [SLOW TEST:6.124 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:44.814: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:45.355: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 118 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Apr 14 16:06:32.502: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Apr 14 16:06:32.502: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-c7m8
STEP: Creating a pod to test subpath
Apr 14 16:06:32.570: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-c7m8" in namespace "provisioning-9006" to be "Succeeded or Failed"
Apr 14 16:06:32.632: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 61.626556ms
Apr 14 16:06:34.695: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124741091s
Apr 14 16:06:36.757: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186626466s
Apr 14 16:06:38.819: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248717603s
Apr 14 16:06:40.882: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311616983s
Apr 14 16:06:42.980: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.409638991s
Apr 14 16:06:45.045: INFO: Pod "pod-subpath-test-inlinevolume-c7m8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.474722941s
STEP: Saw pod success
Apr 14 16:06:45.045: INFO: Pod "pod-subpath-test-inlinevolume-c7m8" satisfied condition "Succeeded or Failed"
Apr 14 16:06:45.107: INFO: Trying to get logs from node ip-172-20-54-22.us-west-2.compute.internal pod pod-subpath-test-inlinevolume-c7m8 container test-container-volume-inlinevolume-c7m8: <nil>
STEP: delete the pod
Apr 14 16:06:45.262: INFO: Waiting for pod pod-subpath-test-inlinevolume-c7m8 to disappear
Apr 14 16:06:45.329: INFO: Pod pod-subpath-test-inlinevolume-c7m8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-c7m8
Apr 14 16:06:45.330: INFO: Deleting pod "pod-subpath-test-inlinevolume-c7m8" in namespace "provisioning-9006"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:49.737: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:49.824: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 149 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:52.350: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 31 lines ...
STEP: Wait for the deployment to be ready
Apr 14 16:06:44.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-9b98b44d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 14 16:06:46.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754013204, loc:(*time.Location)(0x99ba6e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-9b98b44d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 14 16:06:50.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 14 16:06:51.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2614" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:9.744 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
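Editor's note: the AdmissionWebhook test above registers a webhook whose backing service is unreachable and whose failure policy is "Fail", so the API server must reject the subsequent configmap create. As a rough, hypothetical sketch of that fail-closed registration using client-go (not the e2e framework's actual code; all names, the namespace, and the CA bundle are placeholders):

// Standalone, hypothetical sketch -- not the e2e framework's code.
package example

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerFailClosedWebhook creates a validating webhook whose FailurePolicy is
// "Fail": when the API server cannot reach the webhook service, matching
// requests (here, configmap creates) are rejected rather than allowed through.
func registerFailClosedWebhook(ctx context.Context, c kubernetes.Interface) error {
	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "fail-closed.example.com",
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-ns",       // placeholder namespace
					Name:      "e2e-test-webhook", // service the API server cannot talk to
				},
				CABundle: []byte("placeholder-ca-bundle"), // placeholder
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
		}},
	}
	_, err := c.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}

Because the policy is Fail rather than Ignore, an unreachable webhook blocks the operation, which is exactly what the test asserts.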
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:52.911: INFO: Only supported for providers [azure] (not skeleton)
... skipping 75 lines ...
• [SLOW TEST:15.166 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:56.192: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 14 16:06:43.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:14.990 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:920
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:06:58.775: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 90 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSS{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-04-14T16:07:00Z"}

------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
... skipping 21 lines ...
Apr 14 16:06:48.556: INFO: PersistentVolumeClaim pvc-g4kkc found but phase is Pending instead of Bound.
Apr 14 16:06:50.649: INFO: PersistentVolumeClaim pvc-g4kkc found and phase=Bound (12.539640812s)
Apr 14 16:06:50.649: INFO: Waiting up to 3m0s for PersistentVolume local-8v7vn to have phase Bound
Apr 14 16:06:50.733: INFO: PersistentVolume local-8v7vn found and phase=Bound (84.064017ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tcmk
STEP: Creating a pod to test subpath
Apr 14 16:06:50.950: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tcmk" in namespace "provisioning-2338" to be "Succeeded or Failed"
Apr 14 16:06:51.017: INFO: Pod "pod-subpath-test-preprovisionedpv-tcmk": Phase="Pending", Reason="", readiness=false. Elapsed: 66.572162ms
Apr 14 16:06:53.119: INFO: Pod "pod-subpath-test-preprovisionedpv-tcmk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169083588s
Apr 14 16:06:55.189: INFO: Pod "pod-subpath-test-preprovisionedpv-tcmk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238752335s
Apr 14 16:06:57.274: INFO: Pod "pod-subpath-test-preprovisionedpv-tcmk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323560497s
Apr 14 16:06:59.373: INFO: Pod "pod-subpath-test-preprovisionedpv-tcmk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.422661777s
STEP: Saw pod success
Apr 14 16:06:59.373: INFO: Pod "pod-subpath-test-preprovisionedpv-tcmk" satisfied condition "Succeeded or Failed"
Apr 14 16:06:59.487: INFO: Trying to get logs from node ip-172-20-51-155.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-tcmk container test-container-volume-preprovisionedpv-tcmk: <nil>
STEP: delete the pod
Apr 14 16:06:59.776: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tcmk to disappear
Apr 14 16:06:59.837: INFO: Pod pod-subpath-test-preprovisionedpv-tcmk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tcmk
Apr 14 16:06:59.837: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tcmk" in namespace "provisioning-2338"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
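Editor's note: the repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from a poll loop over the pod's phase. A minimal sketch of that pattern, assuming client-go and a 2-second poll interval (not the framework's exact implementation; names are placeholders):

// Standalone, hypothetical sketch of the wait loop logged above.
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls the pod every 2s, up to timeout, until it reaches
// the Succeeded or Failed phase (mirroring the 5m0s waits in the log).
func waitForPodCompletion(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending/Running; keep polling
		}
	})
}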
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:07:01.106: INFO: Driver local doesn't support ext3 -- skipping
... skipping 23 lines ...
Apr 14 16:06:49.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Apr 14 16:06:50.356: INFO: Waiting up to 5m0s for pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955" in namespace "var-expansion-9640" to be "Succeeded or Failed"
Apr 14 16:06:50.459: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955": Phase="Pending", Reason="", readiness=false. Elapsed: 103.446441ms
Apr 14 16:06:52.688: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331657607s
Apr 14 16:06:54.781: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424861721s
Apr 14 16:06:56.844: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955": Phase="Pending", Reason="", readiness=false. Elapsed: 6.488140733s
Apr 14 16:06:58.912: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556319847s
Apr 14 16:07:00.974: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.617984885s
STEP: Saw pod success
Apr 14 16:07:00.974: INFO: Pod "var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955" satisfied condition "Succeeded or Failed"
Apr 14 16:07:01.035: INFO: Trying to get logs from node ip-172-20-51-155.us-west-2.compute.internal pod var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955 container dapi-container: <nil>
STEP: delete the pod
Apr 14 16:07:01.186: INFO: Waiting for pod var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955 to disappear
Apr 14 16:07:01.249: INFO: Pod var-expansion-40b4aa58-2af3-461d-ab5f-61ebb89f7955 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSS
------------------------------
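Editor's note: the Variable Expansion test above creates a pod whose container command references an environment variable with $(...) syntax, which the kubelet expands from the container's env before the container starts. A hypothetical sketch of such a pod spec (names and values are placeholders, not the test's actual ones):

// Standalone, hypothetical sketch of a pod exercising command variable expansion.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				// $(MESSAGE) is substituted by the kubelet from the container's
				// env before the command runs, so echo prints "test-value".
				Command: []string{"echo", "$(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "test-value",
				}},
			}},
		},
	}
}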
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 14 16:07:01.683: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 195 lines ...