PR	olemarkus: Pre-install nvidia container runtime + drivers on GPU instances
Result	ABORTED
Tests	0 failed / 0 succeeded
Started	2021-07-18 12:49
Elapsed	28m55s
Revision	f8ac8786eb566da1ecbc72bd1c45edbc83320bf7
Refs	11628

No Test Failures!


Error lines from build-log.txt

... skipping 493 lines ...
Operation completed over 1 objects/155.0 B.                                      
I0718 12:54:11.955230    4247 copy.go:30] cp /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops /logs/artifacts/81156058-e7c6-11eb-8f34-525df2f63eb1/kops
I0718 12:54:12.132469    4247 up.go:43] Cleaning up any leaked resources from previous cluster
I0718 12:54:12.132522    4247 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0718 12:54:12.149858   12014 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0718 12:54:12.149992   12014 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io" not found

Cluster.kops.k8s.io "e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io" not found
W0718 12:54:12.648621    4247 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0718 12:54:12.648696    4247 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io --yes
I0718 12:54:12.665278   12024 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0718 12:54:12.665848   12024 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io" not found

error reading cluster configuration: Cluster.kops.k8s.io "e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io" not found
I0718 12:54:13.133085    4247 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/18 12:54:13 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0718 12:54:13.142828    4247 http.go:37] curl https://ip.jsb.workers.dev
I0718 12:54:13.235570    4247 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.3 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=amazonvpc --container-runtime=containerd --node-size=t3.large --admin-access 34.123.225.205/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-2a --master-size c5.large
I0718 12:54:13.253627   12034 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0718 12:54:13.254451   12034 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0718 12:54:13.298206   12034 create_cluster.go:825] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0718 12:54:13.772523   12034 new_cluster.go:1054]  Cloud Provider ID = aws
... skipping 42 lines ...

I0718 12:54:43.387282    4247 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0718 12:54:43.404593   12055 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0718 12:54:43.404754   12055 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io

W0718 12:54:44.996764   12055 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
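
The message above points at the placeholder DNS record and the dns-controller logs; a minimal diagnostic sketch for checking them by hand, assuming a shell with this cluster's kubeconfig, the kops binary, kubectl, and dig available (the deployment name and the protokube location are kops defaults, and the master SSH step is an assumption for this setup):

  dig +short api.e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io    # stays at the 203.0.113.123 placeholder until dns-controller updates it
  kops validate cluster --name e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io --wait 15m    # same check the harness runs above
  kubectl -n kube-system logs deployment/dns-controller       # dns-controller diagnostics
  ssh ubuntu@<master-ip> sudo journalctl -u protokube          # protokube logs live on the master node, not in the API (unit name may vary by kops version)
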
W0718 12:54:55.041526   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:55:05.076286   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:55:15.123668   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:55:25.159360   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
W0718 12:55:35.178571   12055 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:55:45.215643   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:55:55.253638   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:56:05.289507   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:56:15.335846   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:56:25.368385   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:56:35.401129   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:56:45.428994   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:56:55.466594   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:57:05.501411   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:57:15.541225   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:57:25.579450   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:57:35.626640   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
W0718 12:57:45.644832   12055 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:57:55.678464   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:58:05.724997   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:58:15.768753   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:58:25.797394   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:58:35.860180   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0718 12:58:45.892627   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 16 lines ...
Pod	kube-system/aws-node-mvn6t						system-node-critical pod "aws-node-mvn6t" is pending
Pod	kube-system/aws-node-q8m9n						system-node-critical pod "aws-node-q8m9n" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-p6x22				system-cluster-critical pod "coredns-autoscaler-6f594f4c58-p6x22" is pending
Pod	kube-system/coredns-f45c4bf76-g2ksq					system-cluster-critical pod "coredns-f45c4bf76-g2ksq" is pending
Pod	kube-system/kube-proxy-ip-172-20-54-233.ap-southeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-54-233.ap-southeast-2.compute.internal" is pending

Validation Failed
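
The pending system pods listed above are where a manual look would usually start; a minimal sketch, assuming the kubeconfig written by the kops create step is active (the pod name is the one from this run and changes every run):

  kubectl -n kube-system get pods -o wide             # shows which node, if any, each pending pod is bound to
  kubectl -n kube-system describe pod aws-node-mvn6t  # the Events section normally says why the pod is still pending
  kubectl get nodes -o wide                           # nodes stay NotReady until the amazonvpc CNI pods come up
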
W0718 12:59:00.216987   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 14 lines ...
Pod	kube-system/aws-node-dmrm6				system-node-critical pod "aws-node-dmrm6" is pending
Pod	kube-system/aws-node-mvn6t				system-node-critical pod "aws-node-mvn6t" is pending
Pod	kube-system/aws-node-q8m9n				system-node-critical pod "aws-node-q8m9n" is not ready (aws-node)
Pod	kube-system/coredns-autoscaler-6f594f4c58-p6x22		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-p6x22" is pending
Pod	kube-system/coredns-f45c4bf76-g2ksq			system-cluster-critical pod "coredns-f45c4bf76-g2ksq" is pending

Validation Failed
W0718 12:59:13.307504   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 12 lines ...
Node	ip-172-20-58-3.ap-southeast-2.compute.internal		node "ip-172-20-58-3.ap-southeast-2.compute.internal" of role "node" is not ready
Pod	kube-system/aws-node-dmrm6				system-node-critical pod "aws-node-dmrm6" is not ready (aws-node)
Pod	kube-system/aws-node-mvn6t				system-node-critical pod "aws-node-mvn6t" is not ready (aws-node)
Pod	kube-system/coredns-f45c4bf76-g2ksq			system-cluster-critical pod "coredns-f45c4bf76-g2ksq" is pending
Pod	kube-system/coredns-f45c4bf76-grflb			system-cluster-critical pod "coredns-f45c4bf76-grflb" is pending

Validation Failed
W0718 12:59:26.392647   12055 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 436 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 425 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:02:07.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5876" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:08.564: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62 lines ...
Jul 18 13:02:09.313: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [2.788 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 103 lines ...
STEP: Destroying namespace "services-4262" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jul 18 13:02:09.690: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 17 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 18 13:02:09.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul 18 13:02:10.858: INFO: found topology map[topology.kubernetes.io/zone:ap-southeast-2a]
Jul 18 13:02:10.858: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul 18 13:02:10.858: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:02:13.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7632" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0718 13:02:08.825690   12668 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 18 13:02:08.825: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 18 13:02:09.397: INFO: Waiting up to 5m0s for pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28" in namespace "emptydir-6685" to be "Succeeded or Failed"
Jul 18 13:02:09.587: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28": Phase="Pending", Reason="", readiness=false. Elapsed: 189.745413ms
Jul 18 13:02:11.776: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378775205s
Jul 18 13:02:13.966: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568831044s
Jul 18 13:02:16.157: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758983059s
Jul 18 13:02:18.347: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949486976s
Jul 18 13:02:20.537: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.139145651s
STEP: Saw pod success
Jul 18 13:02:20.537: INFO: Pod "pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28" satisfied condition "Succeeded or Failed"
Jul 18 13:02:20.726: INFO: Trying to get logs from node ip-172-20-33-101.ap-southeast-2.compute.internal pod pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28 container test-container: <nil>
STEP: delete the pod
Jul 18 13:02:21.126: INFO: Waiting for pod pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28 to disappear
Jul 18 13:02:21.315: INFO: Pod pod-dff32d26-cd89-4fa0-bfb1-3936cd8dcf28 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.325 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:21.891: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Jul 18 13:02:08.208: INFO: Waiting up to 5m0s for pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323" in namespace "downward-api-6076" to be "Succeeded or Failed"
Jul 18 13:02:08.398: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 189.528042ms
Jul 18 13:02:10.588: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379763934s
Jul 18 13:02:12.778: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569324312s
Jul 18 13:02:14.969: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76031453s
Jul 18 13:02:17.161: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953107844s
Jul 18 13:02:19.353: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 11.144686347s
Jul 18 13:02:21.545: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Pending", Reason="", readiness=false. Elapsed: 13.337057349s
Jul 18 13:02:23.736: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.527992764s
STEP: Saw pod success
Jul 18 13:02:23.736: INFO: Pod "metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323" satisfied condition "Succeeded or Failed"
Jul 18 13:02:23.927: INFO: Trying to get logs from node ip-172-20-41-66.ap-southeast-2.compute.internal pod metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323 container client-container: <nil>
STEP: delete the pod
Jul 18 13:02:24.324: INFO: Waiting for pod metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323 to disappear
Jul 18 13:02:24.516: INFO: Pod metadata-volume-5afea3cd-82e6-4b91-9aaf-707f30020323 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 26 lines ...
Jul 18 13:02:15.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 18 13:02:17.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 18 13:02:19.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762210130, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 18 13:02:22.760: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:02:24.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-532" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:19.079 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
• [SLOW TEST:20.512 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:26.998: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 46 lines ...
• [SLOW TEST:23.264 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Jul 18 13:02:10.331: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 18 13:02:10.331: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gl89
STEP: Creating a pod to test subpath
Jul 18 13:02:10.521: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gl89" in namespace "provisioning-1229" to be "Succeeded or Failed"
Jul 18 13:02:10.710: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 188.133702ms
Jul 18 13:02:12.899: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378004599s
Jul 18 13:02:15.093: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571968539s
Jul 18 13:02:17.283: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761155484s
Jul 18 13:02:19.472: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.950719692s
Jul 18 13:02:21.662: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 11.140279315s
Jul 18 13:02:23.854: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 13.332333126s
Jul 18 13:02:26.043: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Pending", Reason="", readiness=false. Elapsed: 15.521958266s
Jul 18 13:02:28.233: INFO: Pod "pod-subpath-test-inlinevolume-gl89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.711308438s
STEP: Saw pod success
Jul 18 13:02:28.233: INFO: Pod "pod-subpath-test-inlinevolume-gl89" satisfied condition "Succeeded or Failed"
Jul 18 13:02:28.421: INFO: Trying to get logs from node ip-172-20-41-66.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-gl89 container test-container-subpath-inlinevolume-gl89: <nil>
STEP: delete the pod
Jul 18 13:02:28.803: INFO: Waiting for pod pod-subpath-test-inlinevolume-gl89 to disappear
Jul 18 13:02:28.991: INFO: Pod pod-subpath-test-inlinevolume-gl89 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gl89
Jul 18 13:02:28.991: INFO: Deleting pod "pod-subpath-test-inlinevolume-gl89" in namespace "provisioning-1229"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:29.782: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 166 lines ...
Jul 18 13:02:26.109: INFO: Creating a PV followed by a PVC
Jul 18 13:02:26.488: INFO: Waiting for PV local-pvwz5lx to bind to PVC pvc-dxc44
Jul 18 13:02:26.488: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dxc44] to have phase Bound
Jul 18 13:02:26.677: INFO: PersistentVolumeClaim pvc-dxc44 found and phase=Bound (189.088132ms)
Jul 18 13:02:26.677: INFO: Waiting up to 3m0s for PersistentVolume local-pvwz5lx to have phase Bound
Jul 18 13:02:26.867: INFO: PersistentVolume local-pvwz5lx found and phase=Bound (189.286099ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jul 18 13:02:27.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-128842ad-85a1-4e84-9e79-ed261fb5a666] Namespace:persistent-local-volumes-test-9201 PodName:hostexec-ip-172-20-54-233.ap-southeast-2.compute.internal-hsknr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul 18 13:02:27.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:25.735 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 18 13:02:06.475: INFO: >>> kubeConfig: /root/.kube/config
... skipping 42 lines ...
• [SLOW TEST:25.912 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:318
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:18.706 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:32.884: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 210 lines ...
Jul 18 13:02:29.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 18 13:02:30.892: INFO: Waiting up to 5m0s for pod "downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814" in namespace "downward-api-1093" to be "Succeeded or Failed"
Jul 18 13:02:31.082: INFO: Pod "downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814": Phase="Pending", Reason="", readiness=false. Elapsed: 189.934665ms
Jul 18 13:02:33.279: INFO: Pod "downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386961405s
STEP: Saw pod success
Jul 18 13:02:33.279: INFO: Pod "downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814" satisfied condition "Succeeded or Failed"
Jul 18 13:02:33.470: INFO: Trying to get logs from node ip-172-20-58-3.ap-southeast-2.compute.internal pod downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814 container dapi-container: <nil>
STEP: delete the pod
Jul 18 13:02:33.891: INFO: Waiting for pod downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814 to disappear
Jul 18 13:02:34.081: INFO: Pod downward-api-dbecceec-76be-4c70-9133-c1a48e5a8814 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:02:34.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1093" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:02:34.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6188" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:34.939: INFO: Only supported for providers [openstack] (not aws)
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:02:34.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-214" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:35.268: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 74 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 47 lines ...
Jul 18 13:02:22.692: INFO: PersistentVolumeClaim pvc-dkjbn found and phase=Bound (188.942587ms)
Jul 18 13:02:22.692: INFO: Waiting up to 3m0s for PersistentVolume nfs-vhvcc to have phase Bound
Jul 18 13:02:22.881: INFO: PersistentVolume nfs-vhvcc found and phase=Bound (189.136155ms)
STEP: Checking pod has write access to PersistentVolume
Jul 18 13:02:23.259: INFO: Creating nfs test pod
Jul 18 13:02:23.450: INFO: Pod should terminate with exitcode 0 (success)
Jul 18 13:02:23.450: INFO: Waiting up to 5m0s for pod "pvc-tester-tbj5s" in namespace "pv-7605" to be "Succeeded or Failed"
Jul 18 13:02:23.641: INFO: Pod "pvc-tester-tbj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 191.643892ms
Jul 18 13:02:25.831: INFO: Pod "pvc-tester-tbj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381769181s
Jul 18 13:02:28.021: INFO: Pod "pvc-tester-tbj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571292041s
Jul 18 13:02:30.212: INFO: Pod "pvc-tester-tbj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762159288s
Jul 18 13:02:32.402: INFO: Pod "pvc-tester-tbj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.952701621s
Jul 18 13:02:34.592: INFO: Pod "pvc-tester-tbj5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.142060943s
STEP: Saw pod success
Jul 18 13:02:34.592: INFO: Pod "pvc-tester-tbj5s" satisfied condition "Succeeded or Failed"
Jul 18 13:02:34.592: INFO: Pod pvc-tester-tbj5s succeeded 
Jul 18 13:02:34.592: INFO: Deleting pod "pvc-tester-tbj5s" in namespace "pv-7605"
Jul 18 13:02:34.822: INFO: Wait up to 5m0s for pod "pvc-tester-tbj5s" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul 18 13:02:35.013: INFO: Deleting PVC pvc-dkjbn to trigger reclamation of PV 
Jul 18 13:02:35.013: INFO: Deleting PersistentVolumeClaim "pvc-dkjbn"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:47.142: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
• [SLOW TEST:17.827 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:46.726 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul 18 13:02:34.321: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 18 13:02:34.510: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pk6h
STEP: Creating a pod to test subpath
Jul 18 13:02:34.701: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pk6h" in namespace "provisioning-2422" to be "Succeeded or Failed"
Jul 18 13:02:34.898: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 196.187506ms
Jul 18 13:02:37.088: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386403354s
Jul 18 13:02:39.278: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576517368s
Jul 18 13:02:41.468: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.766695165s
Jul 18 13:02:43.658: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.956786101s
Jul 18 13:02:45.848: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 11.146249237s
Jul 18 13:02:48.038: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 13.336485512s
Jul 18 13:02:50.228: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.526682695s
Jul 18 13:02:52.419: INFO: Pod "pod-subpath-test-inlinevolume-pk6h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.717257992s
STEP: Saw pod success
Jul 18 13:02:52.419: INFO: Pod "pod-subpath-test-inlinevolume-pk6h" satisfied condition "Succeeded or Failed"
Jul 18 13:02:52.608: INFO: Trying to get logs from node ip-172-20-54-233.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-pk6h container test-container-subpath-inlinevolume-pk6h: <nil>
STEP: delete the pod
Jul 18 13:02:53.007: INFO: Waiting for pod pod-subpath-test-inlinevolume-pk6h to disappear
Jul 18 13:02:53.196: INFO: Pod pod-subpath-test-inlinevolume-pk6h no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pk6h
Jul 18 13:02:53.196: INFO: Deleting pod "pod-subpath-test-inlinevolume-pk6h" in namespace "provisioning-2422"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":20,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 11 lines ...
Jul 18 13:02:07.571: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-75216rjs4
STEP: creating a claim
Jul 18 13:02:07.760: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-mmkl
STEP: Creating a pod to test exec-volume-test
Jul 18 13:02:08.335: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-mmkl" in namespace "volume-7521" to be "Succeeded or Failed"
Jul 18 13:02:08.524: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 188.74192ms
Jul 18 13:02:10.714: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378922704s
Jul 18 13:02:12.904: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568422244s
Jul 18 13:02:15.094: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758421976s
Jul 18 13:02:17.283: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.947794776s
Jul 18 13:02:19.473: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.137836967s
... skipping 6 lines ...
Jul 18 13:02:34.831: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 26.495754345s
Jul 18 13:02:37.021: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 28.68560201s
Jul 18 13:02:39.211: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 30.875019046s
Jul 18 13:02:41.400: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Pending", Reason="", readiness=false. Elapsed: 33.064208756s
Jul 18 13:02:43.589: INFO: Pod "exec-volume-test-dynamicpv-mmkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.253687143s
STEP: Saw pod success
Jul 18 13:02:43.589: INFO: Pod "exec-volume-test-dynamicpv-mmkl" satisfied condition "Succeeded or Failed"
Jul 18 13:02:43.786: INFO: Trying to get logs from node ip-172-20-33-101.ap-southeast-2.compute.internal pod exec-volume-test-dynamicpv-mmkl container exec-container-dynamicpv-mmkl: <nil>
STEP: delete the pod
Jul 18 13:02:44.175: INFO: Waiting for pod exec-volume-test-dynamicpv-mmkl to disappear
Jul 18 13:02:44.364: INFO: Pod exec-volume-test-dynamicpv-mmkl no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-mmkl
Jul 18 13:02:44.364: INFO: Deleting pod "exec-volume-test-dynamicpv-mmkl" in namespace "volume-7521"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:02:56.489: INFO: Only supported for providers [openstack] (not aws)
... skipping 157 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 77 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul 18 13:02:35.452: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 18 13:02:35.452: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-j268
STEP: Creating a pod to test subpath
Jul 18 13:02:35.644: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-j268" in namespace "provisioning-3002" to be "Succeeded or Failed"
Jul 18 13:02:35.834: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 189.682175ms
Jul 18 13:02:38.024: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379866357s
Jul 18 13:02:40.216: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572540725s
Jul 18 13:02:42.408: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 6.763860075s
Jul 18 13:02:44.602: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 8.958562951s
Jul 18 13:02:46.794: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 11.149768513s
Jul 18 13:02:48.985: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 13.341007479s
Jul 18 13:02:51.176: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 15.532196322s
Jul 18 13:02:53.366: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Pending", Reason="", readiness=false. Elapsed: 17.722439657s
Jul 18 13:02:55.557: INFO: Pod "pod-subpath-test-inlinevolume-j268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.91296149s
STEP: Saw pod success
Jul 18 13:02:55.557: INFO: Pod "pod-subpath-test-inlinevolume-j268" satisfied condition "Succeeded or Failed"
Jul 18 13:02:55.747: INFO: Trying to get logs from node ip-172-20-54-233.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-j268 container test-container-subpath-inlinevolume-j268: <nil>
STEP: delete the pod
Jul 18 13:02:56.140: INFO: Waiting for pod pod-subpath-test-inlinevolume-j268 to disappear
Jul 18 13:02:56.330: INFO: Pod pod-subpath-test-inlinevolume-j268 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-j268
Jul 18 13:02:56.330: INFO: Deleting pod "pod-subpath-test-inlinevolume-j268" in namespace "provisioning-3002"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0718 13:02:08.424528   12707 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 18 13:02:08.424: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 18 13:02:09.204: INFO: created pod
Jul 18 13:02:09.204: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6225" to be "Succeeded or Failed"
Jul 18 13:02:09.393: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 189.399305ms
Jul 18 13:02:11.585: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381710767s
Jul 18 13:02:13.779: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575087921s
Jul 18 13:02:15.968: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764555673s
Jul 18 13:02:18.159: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954984423s
Jul 18 13:02:20.348: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 11.144378833s
Jul 18 13:02:22.539: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.334846503s
Jul 18 13:02:24.734: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.530588695s
Jul 18 13:02:26.924: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.720200449s
STEP: Saw pod success
Jul 18 13:02:26.924: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Jul 18 13:02:56.926: INFO: polling logs
Jul 18 13:02:57.123: INFO: Pod logs: 
2021/07/18 13:02:24 OK: Got token
2021/07/18 13:02:24 validating with in-cluster discovery
2021/07/18 13:02:24 OK: got issuer https://api.internal.e2e-c5aa65949d-167d8.test-cncf-aws.k8s.io
2021/07/18 13:02:24 Full, not-validated claims: 
... skipping 14 lines ...
• [SLOW TEST:51.359 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 18 13:02:52.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Jul 18 13:02:54.160: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 18 13:03:00.539: INFO: deleting claim "volume-provisioning-1014"/"pvc-8hjzv"
... skipping 6 lines ...

• [SLOW TEST:8.465 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":4,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:15.217 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:26.992 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":5,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:03:02.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2212" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":5,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 18 13:03:07.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 162 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:03:11.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-2329" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:11.731: INFO: Only supported for providers [gce gke] (not aws)
... skipping 92 lines ...
• [SLOW TEST:45.139 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:15.070: INFO: Only supported for providers [vsphere] (not aws)
... skipping 73 lines ...
• [SLOW TEST:22.474 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 18 13:03:11.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul 18 13:03:12.885: INFO: Waiting up to 5m0s for pod "security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a" in namespace "security-context-2297" to be "Succeeded or Failed"
Jul 18 13:03:13.074: INFO: Pod "security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a": Phase="Pending", Reason="", readiness=false. Elapsed: 188.693027ms
Jul 18 13:03:15.266: INFO: Pod "security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380435757s
STEP: Saw pod success
Jul 18 13:03:15.266: INFO: Pod "security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a" satisfied condition "Succeeded or Failed"
Jul 18 13:03:15.464: INFO: Trying to get logs from node ip-172-20-58-3.ap-southeast-2.compute.internal pod security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a container test-container: <nil>
STEP: delete the pod
Jul 18 13:03:15.846: INFO: Waiting for pod security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a to disappear
Jul 18 13:03:16.035: INFO: Pod security-context-c3ca2ef7-8faa-4c3b-8968-58b3e419776a no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:03:16.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2297" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":3,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:16.436: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 18 13:03:17.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4526" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":4,"skipped":38,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:17.808: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:18.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
Jul 18 13:03:08.683: INFO: PersistentVolumeClaim pvc-6gnw9 found but phase is Pending instead of Bound.
Jul 18 13:03:10.873: INFO: PersistentVolumeClaim pvc-6gnw9 found and phase=Bound (8.954885584s)
Jul 18 13:03:10.873: INFO: Waiting up to 3m0s for PersistentVolume local-8w65j to have phase Bound
Jul 18 13:03:11.062: INFO: PersistentVolume local-8w65j found and phase=Bound (188.848559ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-d9qz
STEP: Creating a pod to test subpath
Jul 18 13:03:11.631: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-d9qz" in namespace "provisioning-5535" to be "Succeeded or Failed"
Jul 18 13:03:11.820: INFO: Pod "pod-subpath-test-preprovisionedpv-d9qz": Phase="Pending", Reason="", readiness=false. Elapsed: 188.747077ms
Jul 18 13:03:14.010: INFO: Pod "pod-subpath-test-preprovisionedpv-d9qz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379502395s
Jul 18 13:03:16.200: INFO: Pod "pod-subpath-test-preprovisionedpv-d9qz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569340408s
Jul 18 13:03:18.390: INFO: Pod "pod-subpath-test-preprovisionedpv-d9qz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.759241958s
STEP: Saw pod success
Jul 18 13:03:18.390: INFO: Pod "pod-subpath-test-preprovisionedpv-d9qz" satisfied condition "Succeeded or Failed"
Jul 18 13:03:18.580: INFO: Trying to get logs from node ip-172-20-41-66.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-d9qz container test-container-subpath-preprovisionedpv-d9qz: <nil>
STEP: delete the pod
Jul 18 13:03:18.968: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-d9qz to disappear
Jul 18 13:03:19.157: INFO: Pod pod-subpath-test-preprovisionedpv-d9qz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-d9qz
Jul 18 13:03:19.157: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-d9qz" in namespace "provisioning-5535"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:21.753: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 18 13:02:32.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 215 lines ...
Jul 18 13:03:17.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 18 13:03:18.965: INFO: Waiting up to 5m0s for pod "pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e" in namespace "emptydir-2620" to be "Succeeded or Failed"
Jul 18 13:03:19.154: INFO: Pod "pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e": Phase="Pending", Reason="", readiness=false. Elapsed: 189.051565ms
Jul 18 13:03:21.344: INFO: Pod "pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378776777s
Jul 18 13:03:23.534: INFO: Pod "pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.568992693s
STEP: Saw pod success
Jul 18 13:03:23.534: INFO: Pod "pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e" satisfied condition "Succeeded or Failed"
Jul 18 13:03:23.723: INFO: Trying to get logs from node ip-172-20-58-3.ap-southeast-2.compute.internal pod pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e container test-container: <nil>
STEP: delete the pod
Jul 18 13:03:24.120: INFO: Waiting for pod pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e to disappear
Jul 18 13:03:24.319: INFO: Pod pod-65ae2eb4-e6c8-4c33-81ef-c7963c1e449e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 18 13:03:24.707: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 16 lines ...