PR hakman: Fix GCE resource tracking
Result ABORTED
Tests 0 failed / 0 succeeded
Started 2022-06-23 09:09
Elapsed 39m58s
Revision b15a1d76f00c0388edb255f3e55169d60a7217e1
Refs 13857

No Test Failures!


Error lines from build-log.txt

... skipping 402 lines ...
Copying file:///home/prow/go/src/k8s.io/kops/.build/upload/latest-ci.txt [Content-Type=text/plain]...
Operation completed over 1 objects/128.0 B.                                      
I0623 09:19:02.560323    5917 copy.go:30] cp /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops /logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kops
I0623 09:19:02.749418    5917 up.go:44] Cleaning up any leaked resources from previous cluster
I0623 09:19:02.749555    5917 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh1047574805/key --ssh-user prow
W0623 09:19:02.944419    5917 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 09:19:02.944501    5917 down.go:48] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops delete cluster --name e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local --yes
I0623 09:19:02.967070   38584 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 09:19:02.967210   38584 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local" not found
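
The sequence above is a best-effort cleanup: the log dump and the delete are both tolerated failures, and the "not found" error is expected when the previous run left nothing behind. A minimal Go sketch of the pattern, assuming a hypothetical runCmd helper (illustrative, not the kubetest2 deployer's actual code):

package sketch

import (
	"log"
	"os"
	"os/exec"
)

// runCmd is a hypothetical helper wrapping os/exec; it streams output
// and returns the command's error, if any.
func runCmd(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

// cleanupLeakedResources mirrors the best-effort sequence in the log:
// both steps are allowed to fail when no previous cluster exists.
func cleanupLeakedResources(kopsBin, clusterName string) {
	if err := runCmd(kopsBin, "toolbox", "dump", "--name", clusterName); err != nil {
		log.Printf("dumping cluster logs failed (ignored): %v", err)
	}
	if err := runCmd(kopsBin, "delete", "cluster", "--name", clusterName, "--yes"); err != nil {
		log.Printf("delete cluster failed (ignored): %v", err)
	}
}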
I0623 09:19:03.084229    5917 gcs.go:51] gsutil ls -b -p k8s-boskos-gce-project-09 gs://k8s-boskos-gce-project-09-state-0e
I0623 09:19:04.693662    5917 gcs.go:70] gsutil mb -p k8s-boskos-gce-project-09 gs://k8s-boskos-gce-project-09-state-0e
Creating gs://k8s-boskos-gce-project-09-state-0e/...
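
The state store is bootstrapped with a check-then-create pattern: gsutil ls -b probes for the bucket, and gsutil mb creates it only when the probe fails. A minimal sketch assuming gsutil is on PATH; ensureStateBucket is an illustrative name:

package sketch

import (
	"os"
	"os/exec"
)

// ensureStateBucket probes for the bucket with `gsutil ls -b`, which
// exits non-zero when the bucket is absent, and only then runs
// `gsutil mb` to create it in the leased Boskos project.
func ensureStateBucket(project, bucket string) error {
	if err := exec.Command("gsutil", "ls", "-b", "-p", project, bucket).Run(); err == nil {
		return nil // bucket already exists
	}
	mb := exec.Command("gsutil", "mb", "-p", project, bucket)
	mb.Stdout, mb.Stderr = os.Stdout, os.Stderr
	return mb.Run()
}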
I0623 09:19:06.712802    5917 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/23 09:19:06 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0623 09:19:06.721940    5917 http.go:37] curl https://ip.jsb.workers.dev
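
The two curl lines above show external-IP discovery: the GCE metadata server is tried first and returns 404 when the instance has no external access config, so the runner falls back to a public echo service. A sketch of that fallback, assuming the Metadata-Flavor: Google header the metadata server requires; externalIP is an illustrative name:

package sketch

import (
	"io"
	"net/http"
	"strings"
)

// externalIP tries the GCE metadata server first and falls back to an
// external echo service, matching the 404-then-fallback sequence above.
func externalIP() (string, error) {
	const metadataURL = "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"
	req, err := http.NewRequest(http.MethodGet, metadataURL, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google") // required by the metadata server
	if resp, err := http.DefaultClient.Do(req); err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			b, err := io.ReadAll(resp.Body)
			return strings.TrimSpace(string(b)), err
		}
		// A 404 here means the instance has no external access config.
	}
	resp, err := http.Get("https://ip.jsb.workers.dev")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return strings.TrimSpace(string(b)), err
}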
I0623 09:19:06.812833    5917 up.go:159] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops create cluster --name e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.24.2 --ssh-public-key /tmp/kops-ssh1047574805/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --networking=cilium --container-runtime=containerd --gce-service-account=default --admin-access 35.239.255.119/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west4-a --master-size e2-standard-2 --project k8s-boskos-gce-project-09
I0623 09:19:06.833656   38874 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 09:19:06.833766   38874 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0623 09:19:06.860972   38874 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh1047574805/key.pub
I0623 09:19:07.110233   38874 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 395 lines ...

I0623 09:20:15.981143    5917 up.go:243] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops validate cluster --name e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local --count 10 --wait 15m0s
I0623 09:20:16.011714   38912 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 09:20:16.011839   38912 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local

W0623 09:20:46.336124   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: i/o timeout
W0623 09:20:56.382837   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:21:06.427404   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:21:16.470874   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:21:26.513249   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:21:36.558858   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:21:46.602053   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:21:56.645325   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:22:06.687524   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:22:16.730442   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:22:26.772878   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:22:36.818578   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": dial tcp 34.125.171.150:443: connect: connection refused
W0623 09:22:56.865245   38912 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.171.150/api/v1/nodes": net/http: TLS handshake timeout
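
The block above is a fixed-interval retry loop: validation is re-run roughly every ten seconds, tolerating "connection refused" and TLS handshake timeouts while the control plane boots, until the --wait window (15m0s here) expires. A minimal sketch of the loop, with validate standing in for a single validation attempt (an assumption, not the actual kops code):

package sketch

import (
	"fmt"
	"log"
	"time"
)

// pollUntilHealthy re-runs a validation function on a fixed interval
// until it succeeds or the wait window expires, mirroring the
// "(will retry): cluster not yet healthy" loop in the log.
func pollUntilHealthy(validate func() error, wait, interval time.Duration) error {
	deadline := time.Now().Add(wait)
	for {
		err := validate()
		if err == nil {
			return nil // cluster is healthy
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("cluster not healthy after %s: %w", wait, err)
		}
		log.Printf("(will retry): %v", err)
		time.Sleep(interval)
	}
}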
I0623 09:23:12.342528   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4

... skipping 5 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/master-us-west4-a-w636	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/master-us-west4-a-w636" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-6v6c	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-6v6c" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-p9s4	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-p9s4" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-pdqm	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-pdqm" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt" has not yet joined cluster

Validation Failed
W0623 09:23:13.246602   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:23:23.642061   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/master-us-west4-a-w636	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/master-us-west4-a-w636" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-6v6c	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-6v6c" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-p9s4	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-p9s4" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-pdqm	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-pdqm" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt" has not yet joined cluster

Validation Failed
W0623 09:23:24.651288   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:23:35.100543   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 14 lines ...
Pod	kube-system/coredns-57d68fdf4b-zhbpm												system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7											system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/dns-controller-6b785dc767-kw97c											system-cluster-critical pod "dns-controller-6b785dc767-kw97c" is pending
Pod	kube-system/kops-controller-df7m2												system-cluster-critical pod "kops-controller-df7m2" is pending
Pod	kube-system/kube-scheduler-master-us-west4-a-w636										system-cluster-critical pod "kube-scheduler-master-us-west4-a-w636" is pending

Validation Failed
W0623 09:23:35.984348   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:23:46.351135   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 13 lines ...
Pod	kube-system/cloud-controller-manager-2sbsd											system-cluster-critical pod "cloud-controller-manager-2sbsd" is pending
Pod	kube-system/coredns-57d68fdf4b-zhbpm												system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7											system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/dns-controller-6b785dc767-kw97c											system-cluster-critical pod "dns-controller-6b785dc767-kw97c" is pending
Pod	kube-system/kops-controller-df7m2												system-cluster-critical pod "kops-controller-df7m2" is pending

Validation Failed
W0623 09:23:47.229957   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:23:57.627974   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 11 lines ...
Pod	kube-system/cilium-ktn2z													system-node-critical pod "cilium-ktn2z" is pending
Pod	kube-system/cilium-operator-7c9847cd56-vs477											system-cluster-critical pod "cilium-operator-7c9847cd56-vs477" is pending
Pod	kube-system/cloud-controller-manager-2sbsd											system-cluster-critical pod "cloud-controller-manager-2sbsd" is pending
Pod	kube-system/coredns-57d68fdf4b-zhbpm												system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7											system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending

Validation Failed
W0623 09:23:58.448164   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:24:08.751452   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 10 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt" has not yet joined cluster
Node	master-us-west4-a-w636														node "master-us-west4-a-w636" of role "master" is not ready
Pod	kube-system/cilium-ktn2z													system-node-critical pod "cilium-ktn2z" is pending
Pod	kube-system/coredns-57d68fdf4b-zhbpm												system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7											system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending

Validation Failed
W0623 09:24:09.614838   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:24:19.960561   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 12 lines ...
Pod	kube-system/cilium-ktn2z													system-node-critical pod "cilium-ktn2z" is pending
Pod	kube-system/coredns-57d68fdf4b-zhbpm												system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7											system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/kube-apiserver-master-us-west4-a-w636										system-cluster-critical pod "kube-apiserver-master-us-west4-a-w636" is pending
Pod	kube-system/metadata-proxy-v0.12-956vc												system-node-critical pod "metadata-proxy-v0.12-956vc" is pending

Validation Failed
W0623 09:24:20.814562   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:24:31.366100   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 22 lines ...
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7	system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/metadata-proxy-v0.12-4l5hr		system-node-critical pod "metadata-proxy-v0.12-4l5hr" is pending
Pod	kube-system/metadata-proxy-v0.12-68rk4		system-node-critical pod "metadata-proxy-v0.12-68rk4" is pending
Pod	kube-system/metadata-proxy-v0.12-956vc		system-node-critical pod "metadata-proxy-v0.12-956vc" is pending
Pod	kube-system/metadata-proxy-v0.12-qbtx5		system-node-critical pod "metadata-proxy-v0.12-qbtx5" is pending

Validation Failed
W0623 09:24:32.236315   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:24:42.674581   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 22 lines ...
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7	system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/metadata-proxy-v0.12-4l5hr		system-node-critical pod "metadata-proxy-v0.12-4l5hr" is pending
Pod	kube-system/metadata-proxy-v0.12-68rk4		system-node-critical pod "metadata-proxy-v0.12-68rk4" is pending
Pod	kube-system/metadata-proxy-v0.12-l757h		system-node-critical pod "metadata-proxy-v0.12-l757h" is pending
Pod	kube-system/metadata-proxy-v0.12-qbtx5		system-node-critical pod "metadata-proxy-v0.12-qbtx5" is pending

Validation Failed
W0623 09:24:43.502353   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:24:53.859249   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 17 lines ...
Pod	kube-system/cilium-vfl8x			system-node-critical pod "cilium-vfl8x" is pending
Pod	kube-system/coredns-57d68fdf4b-zhbpm		system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7	system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/metadata-proxy-v0.12-4l5hr		system-node-critical pod "metadata-proxy-v0.12-4l5hr" is pending
Pod	kube-system/metadata-proxy-v0.12-l757h		system-node-critical pod "metadata-proxy-v0.12-l757h" is pending

Validation Failed
W0623 09:24:54.753798   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:25:05.049449   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 13 lines ...
Pod	kube-system/cilium-kqqc8			system-node-critical pod "cilium-kqqc8" is not ready (cilium-agent)
Pod	kube-system/cilium-vfl8x			system-node-critical pod "cilium-vfl8x" is not ready (cilium-agent)
Pod	kube-system/coredns-57d68fdf4b-zhbpm		system-cluster-critical pod "coredns-57d68fdf4b-zhbpm" is pending
Pod	kube-system/coredns-autoscaler-676759bcc8-tzhj7	system-cluster-critical pod "coredns-autoscaler-676759bcc8-tzhj7" is pending
Pod	kube-system/metadata-proxy-v0.12-l757h		system-node-critical pod "metadata-proxy-v0.12-l757h" is pending

Validation Failed
W0623 09:25:05.875740   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:25:16.254543   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 8 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/cilium-vfl8x		system-node-critical pod "cilium-vfl8x" is not ready (cilium-agent)
Pod	kube-system/coredns-57d68fdf4b-cvm77	system-cluster-critical pod "coredns-57d68fdf4b-cvm77" is pending

Validation Failed
W0623 09:25:17.099452   38912 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 09:25:27.515300   38912 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west4-a	Master	e2-standard-2	1	1	us-west4
nodes-us-west4-a	Node	n1-standard-2	4	4	us-west4
... skipping 183 lines ...
===================================
Random Seed: 1655976449 - Will randomize all specs
Will run 6971 specs

Running in parallel across 25 nodes
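
The suite header above (fixed random seed, 6971 specs, 25 parallel workers) matches the shape of a ginkgo v1 invocation along the following lines; this reconstruction is illustrative, not the exact command this job ran:

ginkgo --nodes=25 --randomizeAllSpecs --seed=1655976449 ./test/e2e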

Jun 23 09:27:47.023: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Jun 23 09:27:47.023: INFO:  > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Jun 23 09:27:47.023: INFO:  > 
Jun 23 09:27:47.023: INFO: Cluster image sources lookup failed: exit status 1

Jun 23 09:27:47.023: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 09:27:47.025: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 23 09:27:47.236: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 23 09:27:47.392: INFO: 22 / 22 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 23 09:27:47.392: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready.
... skipping 1200 lines ...
Jun 23 09:27:47.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W0623 09:27:48.001756   39788 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 23 09:27:48.001: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap that has name configmap-test-emptyKey-7feaa7e8-44d8-4535-8f7d-9b911bb0862c
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:188
Jun 23 09:27:48.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7944" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:27:48.601: INFO: Only supported for providers [azure] (not gce)
... skipping 64 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:27:49.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3411" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 47 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:27:49.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-5870" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":1,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 92 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:27:49.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1966" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:27:49.947: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:188

... skipping 81 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:27:50.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-305" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:27:50.355: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:188

... skipping 64 lines ...
STEP: Destroying namespace "services-6326" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:27:48.214: INFO: Waiting up to 5m0s for pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216" in namespace "projected-8403" to be "Succeeded or Failed"
Jun 23 09:27:48.314: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216": Phase="Pending", Reason="", readiness=false. Elapsed: 100.685034ms
Jun 23 09:27:50.358: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144615828s
Jun 23 09:27:52.402: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187929043s
Jun 23 09:27:54.447: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216": Phase="Running", Reason="", readiness=true. Elapsed: 6.233348688s
Jun 23 09:27:56.490: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216": Phase="Running", Reason="", readiness=false. Elapsed: 8.276465336s
Jun 23 09:27:58.534: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.320162815s
STEP: Saw pod success
Jun 23 09:27:58.534: INFO: Pod "metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216" satisfied condition "Succeeded or Failed"
Jun 23 09:27:58.578: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216 container client-container: <nil>
STEP: delete the pod
Jun 23 09:27:58.698: INFO: Waiting for pod metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216 to disappear
Jun 23 09:27:58.740: INFO: Pod metadata-volume-125a92cd-f501-4253-87b1-2b21789b6216 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.136 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:93
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:27:58.909: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
Jun 23 09:27:47.951: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  test/e2e/node/security_context.go:185
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 09:27:48.227: INFO: Waiting up to 5m0s for pod "security-context-20e33e42-353c-444a-93cc-49ad42059367" in namespace "security-context-2745" to be "Succeeded or Failed"
Jun 23 09:27:48.330: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367": Phase="Pending", Reason="", readiness=false. Elapsed: 102.877718ms
Jun 23 09:27:50.377: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149658356s
Jun 23 09:27:52.424: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196660674s
Jun 23 09:27:54.473: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245836205s
Jun 23 09:27:56.520: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367": Phase="Running", Reason="", readiness=false. Elapsed: 8.292793192s
Jun 23 09:27:58.567: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.339757888s
STEP: Saw pod success
Jun 23 09:27:58.567: INFO: Pod "security-context-20e33e42-353c-444a-93cc-49ad42059367" satisfied condition "Succeeded or Failed"
Jun 23 09:27:58.613: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod security-context-20e33e42-353c-444a-93cc-49ad42059367 container test-container: <nil>
STEP: delete the pod
Jun 23 09:27:58.753: INFO: Waiting for pod security-context-20e33e42-353c-444a-93cc-49ad42059367 to disappear
Jun 23 09:27:58.800: INFO: Pod security-context-20e33e42-353c-444a-93cc-49ad42059367 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
... skipping 6 lines ...
test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  test/e2e/node/security_context.go:185
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Events
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 38 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-b6e442cf-2d09-4008-b4c0-b02da1787c63
STEP: Creating a pod to test consume configMaps
Jun 23 09:27:48.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7" in namespace "configmap-2613" to be "Succeeded or Failed"
Jun 23 09:27:48.312: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7": Phase="Pending", Reason="", readiness=false. Elapsed: 86.856889ms
Jun 23 09:27:50.358: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132630654s
Jun 23 09:27:52.404: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178768724s
Jun 23 09:27:54.448: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222791407s
Jun 23 09:27:56.493: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268054659s
Jun 23 09:27:58.537: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.312554733s
STEP: Saw pod success
Jun 23 09:27:58.538: INFO: Pod "pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7" satisfied condition "Succeeded or Failed"
Jun 23 09:27:58.582: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7 container configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 09:27:59.030: INFO: Waiting for pod pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7 to disappear
Jun 23 09:27:59.077: INFO: Pod pod-configmaps-177553ed-c5aa-4900-b816-eb2b6b7f83b7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.503 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:27:59.254: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run with an image specified user ID
  test/e2e/common/node/security_context.go:153
Jun 23 09:27:48.261: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-655" to be "Succeeded or Failed"
Jun 23 09:27:48.341: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 79.736489ms
Jun 23 09:27:50.386: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1247063s
Jun 23 09:27:52.432: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170392913s
Jun 23 09:27:54.476: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214247876s
Jun 23 09:27:56.520: INFO: Pod "implicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 8.258186324s
Jun 23 09:27:58.565: INFO: Pod "implicit-nonroot-uid": Phase="Running", Reason="", readiness=false. Elapsed: 10.303670676s
Jun 23 09:28:00.628: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.366590147s
Jun 23 09:28:00.628: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
Jun 23 09:28:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-655" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  test/e2e/common/node/security_context.go:106
    should run with an image specified user ID
    test/e2e/common/node/security_context.go:153
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:61
STEP: Creating configMap with name configmap-test-volume-1aefa80b-0a17-4ea0-869d-3d4e13df3366
STEP: Creating a pod to test consume configMaps
Jun 23 09:27:48.141: INFO: Waiting up to 5m0s for pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460" in namespace "configmap-5479" to be "Succeeded or Failed"
Jun 23 09:27:48.226: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Pending", Reason="", readiness=false. Elapsed: 84.982337ms
Jun 23 09:27:50.277: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136065948s
Jun 23 09:27:52.324: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182259239s
Jun 23 09:27:54.368: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226396304s
Jun 23 09:27:56.413: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Running", Reason="", readiness=false. Elapsed: 8.27157354s
Jun 23 09:27:58.459: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Running", Reason="", readiness=false. Elapsed: 10.317604353s
Jun 23 09:28:00.502: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.360548898s
STEP: Saw pod success
Jun 23 09:28:00.502: INFO: Pod "pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460" satisfied condition "Succeeded or Failed"
Jun 23 09:28:00.555: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:28:00.693: INFO: Waiting for pod pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460 to disappear
Jun 23 09:28:00.739: INFO: Pod pod-configmaps-55ab63d9-01e3-4c39-ad58-46af21229460 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:13.218 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:61
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:00.908: INFO: Only supported for providers [aws] (not gce)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  test/e2e/common/node/security_context.go:101
Jun 23 09:27:50.003: INFO: Waiting up to 5m0s for pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a" in namespace "security-context-test-1618" to be "Succeeded or Failed"
Jun 23 09:27:50.047: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Pending", Reason="", readiness=false. Elapsed: 43.417188ms
Jun 23 09:27:52.092: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088425175s
Jun 23 09:27:54.137: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133510368s
Jun 23 09:27:56.183: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179195572s
Jun 23 09:27:58.240: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Running", Reason="", readiness=true. Elapsed: 8.236288384s
Jun 23 09:28:00.284: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Running", Reason="", readiness=false. Elapsed: 10.28053362s
Jun 23 09:28:02.327: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.324096441s
Jun 23 09:28:02.327: INFO: Pod "busybox-user-0-d40b2a3b-22a1-47f7-a0f4-68548b7d584a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
Jun 23 09:28:02.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1618" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  test/e2e/common/node/security_context.go:52
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/node/security_context.go:101
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:02.447: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:188

... skipping 292 lines ...
• [SLOW TEST:14.984 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:02.783: INFO: Only supported for providers [azure] (not gce)
... skipping 37 lines ...
      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1576
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:27:49.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  test/e2e/apps/deployment.go:138
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:02.823: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:188

... skipping 54 lines ...
STEP: Destroying namespace "apply-2773" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:27:48.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 67 lines ...
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:03.628: INFO: Only supported for providers [aws] (not gce)
... skipping 44 lines ...
• [SLOW TEST:16.961 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  test/e2e/apps/replica_set.go:115
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a private image","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:04.776: INFO: Only supported for providers [azure] (not gce)
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 23 09:27:59.302: INFO: Waiting up to 5m0s for pod "pod-f4ba38ab-010f-4002-a52f-25d1971e85e8" in namespace "emptydir-730" to be "Succeeded or Failed"
Jun 23 09:27:59.345: INFO: Pod "pod-f4ba38ab-010f-4002-a52f-25d1971e85e8": Phase="Pending", Reason="", readiness=false. Elapsed: 43.569728ms
Jun 23 09:28:01.388: INFO: Pod "pod-f4ba38ab-010f-4002-a52f-25d1971e85e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086362248s
Jun 23 09:28:03.433: INFO: Pod "pod-f4ba38ab-010f-4002-a52f-25d1971e85e8": Phase="Running", Reason="", readiness=false. Elapsed: 4.131400412s
Jun 23 09:28:05.481: INFO: Pod "pod-f4ba38ab-010f-4002-a52f-25d1971e85e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.179703191s
STEP: Saw pod success
Jun 23 09:28:05.481: INFO: Pod "pod-f4ba38ab-010f-4002-a52f-25d1971e85e8" satisfied condition "Succeeded or Failed"
Jun 23 09:28:05.525: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-f4ba38ab-010f-4002-a52f-25d1971e85e8 container test-container: <nil>
STEP: delete the pod
Jun 23 09:28:05.622: INFO: Waiting for pod pod-f4ba38ab-010f-4002-a52f-25d1971e85e8 to disappear
Jun 23 09:28:05.665: INFO: Pod pod-f4ba38ab-010f-4002-a52f-25d1971e85e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.807 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:05.783: INFO: Only supported for providers [aws] (not gce)
... skipping 14 lines ...
      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1720
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running ","total":-1,"completed":1,"skipped":28,"failed":0}
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:27:59.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  test/e2e/common/node/security_context.go:219
Jun 23 09:27:59.545: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-fe81535d-54f4-4bcd-a4a7-43a5d36214f0" in namespace "security-context-test-924" to be "Succeeded or Failed"
Jun 23 09:27:59.587: INFO: Pod "busybox-readonly-true-fe81535d-54f4-4bcd-a4a7-43a5d36214f0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.033945ms
Jun 23 09:28:01.636: INFO: Pod "busybox-readonly-true-fe81535d-54f4-4bcd-a4a7-43a5d36214f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091285451s
Jun 23 09:28:03.683: INFO: Pod "busybox-readonly-true-fe81535d-54f4-4bcd-a4a7-43a5d36214f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137894817s
Jun 23 09:28:05.725: INFO: Pod "busybox-readonly-true-fe81535d-54f4-4bcd-a4a7-43a5d36214f0": Phase="Failed", Reason="", readiness=false. Elapsed: 6.180466169s
Jun 23 09:28:05.725: INFO: Pod "busybox-readonly-true-fe81535d-54f4-4bcd-a4a7-43a5d36214f0" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
Jun 23 09:28:05.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-924" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/node/security_context.go:173
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    test/e2e/common/node/security_context.go:219
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 80 lines ...
• [SLOW TEST:21.211 seconds]
[sig-api-machinery] Servers with support for API chunking
test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  test/e2e/apimachinery/chunking.go:79
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/common/node/sysctl.go:37
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:28:08.769: INFO: >>> kubeConfig: /root/.kube/config
... skipping 43 lines ...
• [SLOW TEST:21.565 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE]
  test/e2e/network/dns.go:70
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Provider:GCE]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:09.406: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:188

... skipping 93 lines ...
Jun 23 09:28:00.015: INFO: PersistentVolumeClaim pvc-b59d7 found but phase is Pending instead of Bound.
Jun 23 09:28:02.063: INFO: PersistentVolumeClaim pvc-b59d7 found and phase=Bound (2.092928439s)
Jun 23 09:28:02.063: INFO: Waiting up to 3m0s for PersistentVolume local-2krbx to have phase Bound
Jun 23 09:28:02.108: INFO: PersistentVolume local-2krbx found and phase=Bound (45.550443ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j6z2
STEP: Creating a pod to test subpath
Jun 23 09:28:02.251: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j6z2" in namespace "provisioning-3215" to be "Succeeded or Failed"
Jun 23 09:28:02.298: INFO: Pod "pod-subpath-test-preprovisionedpv-j6z2": Phase="Pending", Reason="", readiness=false. Elapsed: 47.090376ms
Jun 23 09:28:04.361: INFO: Pod "pod-subpath-test-preprovisionedpv-j6z2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109817043s
Jun 23 09:28:06.409: INFO: Pod "pod-subpath-test-preprovisionedpv-j6z2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157597584s
Jun 23 09:28:08.465: INFO: Pod "pod-subpath-test-preprovisionedpv-j6z2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213676471s
Jun 23 09:28:10.518: INFO: Pod "pod-subpath-test-preprovisionedpv-j6z2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.266667446s
STEP: Saw pod success
Jun 23 09:28:10.518: INFO: Pod "pod-subpath-test-preprovisionedpv-j6z2" satisfied condition "Succeeded or Failed"
Jun 23 09:28:10.566: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-j6z2 container test-container-volume-preprovisionedpv-j6z2: <nil>
STEP: delete the pod
Jun 23 09:28:10.675: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j6z2 to disappear
Jun 23 09:28:10.722: INFO: Pod pod-subpath-test-preprovisionedpv-j6z2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j6z2
Jun 23 09:28:10.722: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j6z2" in namespace "provisioning-3215"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:11.472: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 124 lines ...
Jun 23 09:27:59.303: INFO: PersistentVolumeClaim pvc-9ch6l found but phase is Pending instead of Bound.
Jun 23 09:28:01.360: INFO: PersistentVolumeClaim pvc-9ch6l found and phase=Bound (2.10144498s)
Jun 23 09:28:01.360: INFO: Waiting up to 3m0s for PersistentVolume local-dspgc to have phase Bound
Jun 23 09:28:01.403: INFO: PersistentVolume local-dspgc found and phase=Bound (43.591356ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2sz2
STEP: Creating a pod to test subpath
Jun 23 09:28:01.551: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2sz2" in namespace "provisioning-1603" to be "Succeeded or Failed"
Jun 23 09:28:01.608: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2": Phase="Pending", Reason="", readiness=false. Elapsed: 57.639226ms
Jun 23 09:28:03.655: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103972252s
Jun 23 09:28:05.700: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149096837s
Jun 23 09:28:07.745: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194338083s
Jun 23 09:28:09.798: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.246934157s
Jun 23 09:28:11.846: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.294941505s
STEP: Saw pod success
Jun 23 09:28:11.846: INFO: Pod "pod-subpath-test-preprovisionedpv-2sz2" satisfied condition "Succeeded or Failed"
Jun 23 09:28:11.890: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-2sz2 container test-container-subpath-preprovisionedpv-2sz2: <nil>
STEP: delete the pod
Jun 23 09:28:12.028: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2sz2 to disappear
Jun 23 09:28:12.148: INFO: Pod pod-subpath-test-preprovisionedpv-2sz2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2sz2
Jun 23 09:28:12.148: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2sz2" in namespace "provisioning-1603"
... skipping 30 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.944 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:13.792: INFO: Only supported for providers [azure] (not gce)
... skipping 71 lines ...
Jun 23 09:27:59.530: INFO: PersistentVolumeClaim pvc-4k8d9 found but phase is Pending instead of Bound.
Jun 23 09:28:01.605: INFO: PersistentVolumeClaim pvc-4k8d9 found and phase=Bound (6.219765232s)
Jun 23 09:28:01.606: INFO: Waiting up to 3m0s for PersistentVolume local-w5447 to have phase Bound
Jun 23 09:28:01.656: INFO: PersistentVolume local-w5447 found and phase=Bound (50.978176ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rm5j
STEP: Creating a pod to test subpath
Jun 23 09:28:01.815: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rm5j" in namespace "provisioning-314" to be "Succeeded or Failed"
Jun 23 09:28:01.861: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j": Phase="Pending", Reason="", readiness=false. Elapsed: 46.150788ms
Jun 23 09:28:03.907: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092113601s
Jun 23 09:28:05.955: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140005688s
Jun 23 09:28:08.007: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192340546s
Jun 23 09:28:10.056: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241512971s
Jun 23 09:28:12.150: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.33516219s
STEP: Saw pod success
Jun 23 09:28:12.150: INFO: Pod "pod-subpath-test-preprovisionedpv-rm5j" satisfied condition "Succeeded or Failed"
Jun 23 09:28:12.210: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-rm5j container test-container-subpath-preprovisionedpv-rm5j: <nil>
STEP: delete the pod
Jun 23 09:28:12.442: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rm5j to disappear
Jun 23 09:28:12.490: INFO: Pod pod-subpath-test-preprovisionedpv-rm5j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rm5j
Jun 23 09:28:12.490: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rm5j" in namespace "provisioning-314"
... skipping 30 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:13.940: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-c22fc161-e9a1-4d1e-b8d7-56ce37a7c00e
STEP: Creating a pod to test consume secrets
Jun 23 09:28:06.272: INFO: Waiting up to 5m0s for pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79" in namespace "secrets-4898" to be "Succeeded or Failed"
Jun 23 09:28:06.315: INFO: Pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79": Phase="Pending", Reason="", readiness=false. Elapsed: 43.084948ms
Jun 23 09:28:08.372: INFO: Pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099319786s
Jun 23 09:28:10.418: INFO: Pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14589002s
Jun 23 09:28:12.468: INFO: Pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79": Phase="Running", Reason="", readiness=true. Elapsed: 6.195519598s
Jun 23 09:28:14.516: INFO: Pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.243871916s
STEP: Saw pod success
Jun 23 09:28:14.516: INFO: Pod "pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79" satisfied condition "Succeeded or Failed"
Jun 23 09:28:14.572: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 09:28:14.703: INFO: Waiting for pod pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79 to disappear
Jun 23 09:28:14.758: INFO: Pod pod-secrets-e09892e6-2551-4a54-a2d9-960f772b9c79 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.991 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 192 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 23 09:28:15.667: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
... skipping 9 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:15.915: INFO: Only supported for providers [openstack] (not gce)
... skipping 131 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:380
    should support exec through an HTTP proxy
    test/e2e/kubectl/kubectl.go:440
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":1,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:16.219: INFO: Driver local doesn't support ext3 -- skipping
... skipping 40 lines ...
STEP: updating the pod
Jun 23 09:28:10.447: INFO: Successfully updated pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6"
Jun 23 09:28:10.447: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6" in namespace "pods-3102" to be "terminated due to deadline exceeded"
Jun 23 09:28:10.491: INFO: Pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6": Phase="Running", Reason="", readiness=true. Elapsed: 43.754627ms
Jun 23 09:28:12.535: INFO: Pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6": Phase="Running", Reason="", readiness=true. Elapsed: 2.08827472s
Jun 23 09:28:14.586: INFO: Pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6": Phase="Running", Reason="", readiness=true. Elapsed: 4.139190329s
Jun 23 09:28:16.633: INFO: Pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.185668477s
Jun 23 09:28:16.633: INFO: Pod "pod-update-activedeadlineseconds-99b1c7a4-5a6c-4b3a-86ad-0563160482b6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:188
Jun 23 09:28:16.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3102" for this suite.


• [SLOW TEST:17.414 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:16.782: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 58 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:28:16.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3718" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:16.927: INFO: Only supported for providers [azure] (not gce)
... skipping 121 lines ...
Jun 23 09:28:00.438: INFO: PersistentVolumeClaim pvc-4k5tl found but phase is Pending instead of Bound.
Jun 23 09:28:02.484: INFO: PersistentVolumeClaim pvc-4k5tl found and phase=Bound (2.090840163s)
Jun 23 09:28:02.484: INFO: Waiting up to 3m0s for PersistentVolume local-4rd8d to have phase Bound
Jun 23 09:28:02.531: INFO: PersistentVolume local-4rd8d found and phase=Bound (47.426797ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gwkv
STEP: Creating a pod to test subpath
Jun 23 09:28:02.674: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gwkv" in namespace "provisioning-7397" to be "Succeeded or Failed"
Jun 23 09:28:02.720: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Pending", Reason="", readiness=false. Elapsed: 46.246568ms
Jun 23 09:28:04.767: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093056515s
Jun 23 09:28:06.815: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140879229s
Jun 23 09:28:08.860: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186594499s
Jun 23 09:28:10.909: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23536619s
Jun 23 09:28:12.957: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283077256s
Jun 23 09:28:15.006: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.332646341s
STEP: Saw pod success
Jun 23 09:28:15.006: INFO: Pod "pod-subpath-test-preprovisionedpv-gwkv" satisfied condition "Succeeded or Failed"
Jun 23 09:28:15.059: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-gwkv container test-container-volume-preprovisionedpv-gwkv: <nil>
STEP: delete the pod
Jun 23 09:28:15.195: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gwkv to disappear
Jun 23 09:28:15.242: INFO: Pod pod-subpath-test-preprovisionedpv-gwkv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gwkv
Jun 23 09:28:15.242: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gwkv" in namespace "provisioning-7397"
... skipping 34 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:18.120: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/framework/framework.go:188

... skipping 56 lines ...
Jun 23 09:28:18.355: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:20.372: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jun 23 09:28:20.422: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=sctp-9233 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jun 23 09:28:20.968: INFO: rc: 7
Jun 23 09:28:21.024: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jun 23 09:28:21.072: INFO: Pod kube-proxy-mode-detector no longer exists
Jun 23 09:28:21.072: INFO: Couldn't detect KubeProxy mode - skip, error running /logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=sctp-9233 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
[AfterEach] [sig-network] SCTP [LinuxOnly]
  test/e2e/framework/framework.go:188
Jun 23 09:28:21.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sctp-9233" for this suite.


S [SKIPPING] [11.297 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
  should create a ClusterIP Service with SCTP ports [It]
  test/e2e/network/service.go:4178

  Couldn't detect KubeProxy mode - skip, error running /logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=sctp-9233 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
  Command stdout:
  
  stderr:
  + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
  command terminated with exit code 7
  
  error:
  exit status 7

  test/e2e/network/service.go:4181
------------------------------
SSSSS
------------------------------
... skipping 17 lines ...
      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:28:09.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 09:28:09.579: INFO: Waiting up to 5m0s for pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0" in namespace "emptydir-5395" to be "Succeeded or Failed"
Jun 23 09:28:09.621: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.72369ms
Jun 23 09:28:11.665: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086069072s
Jun 23 09:28:13.708: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129096158s
Jun 23 09:28:15.760: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181869944s
Jun 23 09:28:17.808: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228969272s
Jun 23 09:28:19.853: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.273987644s
Jun 23 09:28:21.896: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.317375016s
STEP: Saw pod success
Jun 23 09:28:21.896: INFO: Pod "pod-316eed56-4e70-4746-9974-7fef2da05ed0" satisfied condition "Succeeded or Failed"
Jun 23 09:28:21.942: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-316eed56-4e70-4746-9974-7fef2da05ed0 container test-container: <nil>
STEP: delete the pod
Jun 23 09:28:22.056: INFO: Waiting for pod pod-316eed56-4e70-4746-9974-7fef2da05ed0 to disappear
Jun 23 09:28:22.099: INFO: Pod pod-316eed56-4e70-4746-9974-7fef2da05ed0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:12.965 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:22.225: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 53 lines ...
Jun 23 09:28:15.797: INFO: PersistentVolumeClaim pvc-5pc9h found but phase is Pending instead of Bound.
Jun 23 09:28:17.840: INFO: PersistentVolumeClaim pvc-5pc9h found and phase=Bound (12.320138049s)
Jun 23 09:28:17.840: INFO: Waiting up to 3m0s for PersistentVolume local-x8c9n to have phase Bound
Jun 23 09:28:17.884: INFO: PersistentVolume local-x8c9n found and phase=Bound (43.378214ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hqlc
STEP: Creating a pod to test subpath
Jun 23 09:28:18.020: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hqlc" in namespace "provisioning-9552" to be "Succeeded or Failed"
Jun 23 09:28:18.065: INFO: Pod "pod-subpath-test-preprovisionedpv-hqlc": Phase="Pending", Reason="", readiness=false. Elapsed: 45.163155ms
Jun 23 09:28:20.117: INFO: Pod "pod-subpath-test-preprovisionedpv-hqlc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096762748s
Jun 23 09:28:22.162: INFO: Pod "pod-subpath-test-preprovisionedpv-hqlc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14192336s
Jun 23 09:28:24.207: INFO: Pod "pod-subpath-test-preprovisionedpv-hqlc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187118905s
STEP: Saw pod success
Jun 23 09:28:24.207: INFO: Pod "pod-subpath-test-preprovisionedpv-hqlc" satisfied condition "Succeeded or Failed"
Jun 23 09:28:24.252: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-hqlc container test-container-volume-preprovisionedpv-hqlc: <nil>
STEP: delete the pod
Jun 23 09:28:24.364: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hqlc to disappear
Jun 23 09:28:24.409: INFO: Pod pod-subpath-test-preprovisionedpv-hqlc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hqlc
Jun 23 09:28:24.409: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hqlc" in namespace "provisioning-9552"
... skipping 34 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:26.192: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
• [SLOW TEST:5.775 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":4,"skipped":4,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 78 lines ...
• [SLOW TEST:16.936 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":4,"skipped":68,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:32.507: INFO: Only supported for providers [aws] (not gce)
... skipping 100 lines ...
• [SLOW TEST:15.933 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 33 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Volume limits
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 166 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should store data
      test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 30 lines ...
Jun 23 09:28:15.742: INFO: PersistentVolumeClaim pvc-s4fsp found but phase is Pending instead of Bound.
Jun 23 09:28:17.787: INFO: PersistentVolumeClaim pvc-s4fsp found and phase=Bound (8.22878963s)
Jun 23 09:28:17.787: INFO: Waiting up to 3m0s for PersistentVolume local-54p85 to have phase Bound
Jun 23 09:28:17.830: INFO: PersistentVolume local-54p85 found and phase=Bound (43.272674ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-s68w
STEP: Creating a pod to test subpath
Jun 23 09:28:17.972: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s68w" in namespace "provisioning-5857" to be "Succeeded or Failed"
Jun 23 09:28:18.019: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 47.134306ms
Jun 23 09:28:20.066: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094654518s
Jun 23 09:28:22.112: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140552227s
Jun 23 09:28:24.162: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190002531s
Jun 23 09:28:26.210: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238363474s
Jun 23 09:28:28.255: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283405851s
Jun 23 09:28:30.301: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.329035835s
Jun 23 09:28:32.347: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.375637065s
STEP: Saw pod success
Jun 23 09:28:32.348: INFO: Pod "pod-subpath-test-preprovisionedpv-s68w" satisfied condition "Succeeded or Failed"
Jun 23 09:28:32.396: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-s68w container test-container-subpath-preprovisionedpv-s68w: <nil>
STEP: delete the pod
Jun 23 09:28:32.496: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s68w to disappear
Jun 23 09:28:32.542: INFO: Pod pod-subpath-test-preprovisionedpv-s68w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s68w
Jun 23 09:28:32.542: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s68w" in namespace "provisioning-5857"
... skipping 34 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:34.402: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 161 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:28:35.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6378" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":6,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:28:39.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2503" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":4,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:39.584: INFO: Only supported for providers [vsphere] (not gce)
... skipping 61 lines ...
Jun 23 09:28:13.609: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:13.656: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:13.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:13.744: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:13.794: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:13.838: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:13.838: INFO: Lookups using dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local]

Jun 23 09:28:18.900: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:18.956: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.006: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.063: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.117: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.167: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.214: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.273: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:19.273: INFO: Lookups using dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local]

Jun 23 09:28:23.886: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:23.931: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:23.978: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:24.023: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:24.070: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:24.115: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:24.161: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:24.214: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:24.214: INFO: Lookups using dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local]

Jun 23 09:28:28.886: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:28.929: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:28.979: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:29.024: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:29.067: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:29.112: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:29.158: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:29.203: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:29.203: INFO: Lookups using dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local]

Jun 23 09:28:33.892: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:33.937: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:33.981: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:34.031: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:34.077: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:34.124: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:34.167: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:34.217: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local from pod dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77: the server could not find the requested resource (get pods dns-test-73d5bf27-1d98-4f72-9006-586651593b77)
Jun 23 09:28:34.217: INFO: Lookups using dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7675.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7675.svc.cluster.local jessie_udp@dns-test-service-2.dns-7675.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7675.svc.cluster.local]

Jun 23 09:28:39.447: INFO: DNS probes using dns-7675/dns-test-73d5bf27-1d98-4f72-9006-586651593b77 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:38.749 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:39.721: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:51
[It] volume on default medium should have the correct mode using FSGroup
  test/e2e/common/storage/empty_dir.go:72
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 09:28:35.794: INFO: Waiting up to 5m0s for pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f" in namespace "emptydir-1687" to be "Succeeded or Failed"
Jun 23 09:28:35.836: INFO: Pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.781708ms
Jun 23 09:28:37.882: INFO: Pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087997374s
Jun 23 09:28:39.934: INFO: Pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139519698s
Jun 23 09:28:41.978: INFO: Pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18378925s
Jun 23 09:28:44.026: INFO: Pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.231633046s
STEP: Saw pod success
Jun 23 09:28:44.026: INFO: Pod "pod-937d8285-b243-43bb-b95f-bfe9a183100f" satisfied condition "Succeeded or Failed"
Jun 23 09:28:44.069: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-937d8285-b243-43bb-b95f-bfe9a183100f container test-container: <nil>
STEP: delete the pod
Jun 23 09:28:44.168: INFO: Waiting for pod pod-937d8285-b243-43bb-b95f-bfe9a183100f to disappear
Jun 23 09:28:44.211: INFO: Pod pod-937d8285-b243-43bb-b95f-bfe9a183100f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:49
    volume on default medium should have the correct mode using FSGroup
    test/e2e/common/storage/empty_dir.go:72
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":7,"skipped":69,"failed":0}
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:28:44.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
STEP: Destroying namespace "services-2458" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":8,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:45.069: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:188

... skipping 30 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:28:45.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5108" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":9,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 80 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:47.968: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/framework/framework.go:188

... skipping 206 lines ...
test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  test/e2e/storage/csi_mock_volume.go:332
    should not require VolumeAttach for drivers without attachment
    test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":3,"skipped":30,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 77 lines ...
• [SLOW TEST:19.192 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  should validate Deployment Status endpoints [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:53.086: INFO: Only supported for providers [openstack] (not gce)
... skipping 52 lines ...
Jun 23 09:28:40.919: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Running", Reason="", readiness=true. Elapsed: 36.914088058s
Jun 23 09:28:42.966: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Running", Reason="", readiness=true. Elapsed: 38.960748629s
Jun 23 09:28:45.012: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Running", Reason="", readiness=true. Elapsed: 41.006667195s
Jun 23 09:28:47.056: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Running", Reason="", readiness=true. Elapsed: 43.050911519s
Jun 23 09:28:49.105: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Running", Reason="", readiness=true. Elapsed: 45.099911135s
Jun 23 09:28:51.151: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Running", Reason="", readiness=true. Elapsed: 47.146451677s
Jun 23 09:28:53.196: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd": Phase="Failed", Reason="Evicted", readiness=false. Elapsed: 49.190794314s
Jun 23 09:28:53.196: INFO: Pod "pod-should-be-evicted1307f7cd-cc07-446d-8739-bcd10dd7d4dd" satisfied condition "terminated due to deadline exceeded"
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  test/e2e/framework/framework.go:188
Jun 23 09:28:53.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6120" for this suite.
... skipping 4 lines ...
test/e2e/node/framework.go:23
  Pod Container lifecycle
  test/e2e/node/pods.go:226
    evicted pods should be terminal
    test/e2e/node/pods.go:302
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:53.361: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 107 lines ...
• [SLOW TEST:41.636 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  test/e2e/storage/pvc_protection.go:147
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":3,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:53.653: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
STEP: Destroying namespace "apply-3024" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":4,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 132 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:28:55.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2576" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":2,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:55.937: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 54 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/storage/host_path.go:39
[It] should support r/w [NodeConformance]
  test/e2e/common/storage/host_path.go:67
STEP: Creating a pod to test hostPath r/w
Jun 23 09:28:51.730: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9145" to be "Succeeded or Failed"
Jun 23 09:28:51.773: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 43.200999ms
Jun 23 09:28:53.818: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088017807s
Jun 23 09:28:55.876: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145733752s
STEP: Saw pod success
Jun 23 09:28:55.876: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 23 09:28:55.922: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 23 09:28:56.017: INFO: Waiting for pod pod-host-path-test to disappear
Jun 23 09:28:56.060: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:188
Jun 23 09:28:56.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9145" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":4,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:56.180: INFO: Only supported for providers [azure] (not gce)
... skipping 35 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:28:56.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3053" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:56.710: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 106 lines ...
Jun 23 09:28:18.712: INFO: PersistentVolumeClaim csi-hostpathpgsd2 found but phase is Pending instead of Bound.
Jun 23 09:28:20.762: INFO: PersistentVolumeClaim csi-hostpathpgsd2 found but phase is Pending instead of Bound.
Jun 23 09:28:22.806: INFO: PersistentVolumeClaim csi-hostpathpgsd2 found but phase is Pending instead of Bound.
Jun 23 09:28:24.850: INFO: PersistentVolumeClaim csi-hostpathpgsd2 found and phase=Bound (8.230308615s)
STEP: Creating pod pod-subpath-test-dynamicpv-skn6
STEP: Creating a pod to test subpath
Jun 23 09:28:24.985: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-skn6" in namespace "provisioning-7747" to be "Succeeded or Failed"
Jun 23 09:28:25.028: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 43.009002ms
Jun 23 09:28:27.072: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086887686s
Jun 23 09:28:29.116: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131026244s
Jun 23 09:28:31.162: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176799362s
Jun 23 09:28:33.210: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.225352671s
Jun 23 09:28:35.256: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.270901169s
Jun 23 09:28:37.301: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.316216532s
Jun 23 09:28:39.357: INFO: Pod "pod-subpath-test-dynamicpv-skn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.372552749s
STEP: Saw pod success
Jun 23 09:28:39.358: INFO: Pod "pod-subpath-test-dynamicpv-skn6" satisfied condition "Succeeded or Failed"
Jun 23 09:28:39.408: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-dynamicpv-skn6 container test-container-volume-dynamicpv-skn6: <nil>
STEP: delete the pod
Jun 23 09:28:39.556: INFO: Waiting for pod pod-subpath-test-dynamicpv-skn6 to disappear
Jun 23 09:28:39.602: INFO: Pod pod-subpath-test-dynamicpv-skn6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-skn6
Jun 23 09:28:39.602: INFO: Deleting pod "pod-subpath-test-dynamicpv-skn6" in namespace "provisioning-7747"
... skipping 61 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 78 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:28:59.060: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 39 lines ...
• [SLOW TEST:46.670 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  test/e2e/apps/cronjob.go:191
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:28:56.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 23 09:28:56.381: INFO: Waiting up to 5m0s for pod "downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e" in namespace "downward-api-4357" to be "Succeeded or Failed"
Jun 23 09:28:56.424: INFO: Pod "downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.845139ms
Jun 23 09:28:58.470: INFO: Pod "downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089379968s
Jun 23 09:29:00.519: INFO: Pod "downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137583845s
STEP: Saw pod success
Jun 23 09:29:00.519: INFO: Pod "downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e" satisfied condition "Succeeded or Failed"
Jun 23 09:29:00.570: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e container dapi-container: <nil>
STEP: delete the pod
Jun 23 09:29:00.669: INFO: Waiting for pod downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e to disappear
Jun 23 09:29:00.715: INFO: Pod downward-api-613a1c2e-8c92-4147-8d7c-384dcede675e no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:188
Jun 23 09:29:00.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4357" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 102 lines ...
test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  test/e2e/storage/csi_mock_volume.go:1334
    CSIStorageCapacity used, have capacity
    test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":2,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:01.610: INFO: Driver "local" does not provide raw block - skipping
... skipping 276 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:01.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-424" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] version v1
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 87 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:03.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2204" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:03.843: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 23 09:28:59.501: INFO: Waiting up to 5m0s for pod "downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412" in namespace "downward-api-7146" to be "Succeeded or Failed"
Jun 23 09:28:59.547: INFO: Pod "downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412": Phase="Pending", Reason="", readiness=false. Elapsed: 46.149773ms
Jun 23 09:29:01.594: INFO: Pod "downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093085611s
Jun 23 09:29:03.642: INFO: Pod "downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14169487s
STEP: Saw pod success
Jun 23 09:29:03.642: INFO: Pod "downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412" satisfied condition "Succeeded or Failed"
Jun 23 09:29:03.689: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412 container dapi-container: <nil>
STEP: delete the pod
Jun 23 09:29:03.806: INFO: Waiting for pod downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412 to disappear
Jun 23 09:29:03.852: INFO: Pod downward-api-a3318511-e376-45cc-a2ce-2ed543d5a412 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:188
Jun 23 09:29:03.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7146" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:03.988: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 135 lines ...
• [SLOW TEST:7.187 seconds]
[sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers]
test/e2e/common/node/framework.go:23
  will start an ephemeral container in an existing pod
  test/e2e/common/node/ephemeral_containers.go:44
------------------------------
{"msg":"PASSED [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod","total":-1,"completed":4,"skipped":31,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:6.762 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:08.514: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:29:02.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9" in namespace "downward-api-5126" to be "Succeeded or Failed"
Jun 23 09:29:02.500: INFO: Pod "downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.396771ms
Jun 23 09:29:04.550: INFO: Pod "downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097791951s
Jun 23 09:29:06.598: INFO: Pod "downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146267719s
Jun 23 09:29:08.647: INFO: Pod "downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.195481981s
STEP: Saw pod success
Jun 23 09:29:08.647: INFO: Pod "downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9" satisfied condition "Succeeded or Failed"
Jun 23 09:29:08.695: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9 container client-container: <nil>
STEP: delete the pod
Jun 23 09:29:08.802: INFO: Waiting for pod downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9 to disappear
Jun 23 09:29:08.864: INFO: Pod downwardapi-volume-bfad96e6-7e9c-4fe4-a4a8-b687d8dac6e9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.913 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:08.990: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/framework/framework.go:188

... skipping 134 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 47 lines ...
Jun 23 09:28:20.902: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:22.952: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:24.948: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:26.950: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:28.947: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:30.959: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 09:28:30.959: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=services-5957 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.140.41:80 && echo service-down-failed'
Jun 23 09:28:33.536: INFO: rc: 28
Jun 23 09:28:33.536: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.140.41:80 && echo service-down-failed" in pod services-5957/verify-service-down-host-exec-pod: error running /logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=services-5957 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.140.41:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.69.140.41:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5957
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Jun 23 09:28:33.680: INFO: Creating new host exec pod
Jun 23 09:28:33.774: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:35.822: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:37.820: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:39.819: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:41.819: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:28:43.820: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 09:28:43.820: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=services-5957 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.164.154:80 && echo service-down-failed'
Jun 23 09:28:46.371: INFO: rc: 28
Jun 23 09:28:46.372: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.164.154:80 && echo service-down-failed" in pod services-5957/verify-service-down-host-exec-pod: error running /logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=services-5957 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.164.154:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.164.154:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5957
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Jun 23 09:28:46.518: INFO: Creating new host exec pod
... skipping 15 lines ...
STEP: Deleting pod verify-service-up-exec-pod-pgdmg in namespace services-5957
STEP: verifying service-headless is still not up
Jun 23 09:29:02.542: INFO: Creating new host exec pod
Jun 23 09:29:02.635: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:29:04.683: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 09:29:06.680: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 09:29:06.680: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=services-5957 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.140.41:80 && echo service-down-failed'
Jun 23 09:29:09.234: INFO: rc: 28
Jun 23 09:29:09.234: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.140.41:80 && echo service-down-failed" in pod services-5957/verify-service-down-host-exec-pod: error running /logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=services-5957 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.140.41:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.69.140.41:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5957
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:188
Jun 23 09:29:09.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:81.691 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  test/e2e/network/service.go:2207
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:09.478: INFO: Only supported for providers [openstack] (not gce)
... skipping 99 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:380
    should support inline execution and attach
    test/e2e/kubectl/kubectl.go:564
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:09.923: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 93 lines ...
Jun 23 09:28:23.625: INFO: PersistentVolumeClaim pvc-pm5rs found and phase=Bound (2.088739409s)
STEP: Deleting the previously created pod
Jun 23 09:28:48.850: INFO: Deleting pod "pvc-volume-tester-j7mts" in namespace "csi-mock-volumes-5000"
Jun 23 09:28:48.895: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j7mts" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 09:28:55.036: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IktlS1BRUjQzNjBoWUpBbWlLcHd3b2xtZVNmb2lSLVEzWGM3LUtNM2lmNm8ifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2NTU5NzcxMTUsImlhdCI6MTY1NTk3NjUxNSwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLXByMTM4NTcucHVsbC1rb3BzLWUyZS1rOHMtZ2NlLms4cy5sb2NhbCIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoiY3NpLW1vY2stdm9sdW1lcy01MDAwIiwicG9kIjp7Im5hbWUiOiJwdmMtdm9sdW1lLXRlc3Rlci1qN210cyIsInVpZCI6IjcxNzU0YzE3LTZmZDUtNDk2Zi1iMzQ5LTEyMTFlMmIyNWQ2NSJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjRkZGU5OWFjLWVhZjUtNDAyMC05NDJlLWFmMGQzZjljMjRhOCJ9fSwibmJmIjoxNjU1OTc2NTE1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y3NpLW1vY2stdm9sdW1lcy01MDAwOmRlZmF1bHQifQ.bboXlPYyNyEazzH49x0FE_CpOWQ4slEC9JqgTsVCo1y7UCclOLLzG7JuV50g8rx465DwbAKStBVlqUF54drYua-XBbIgfl7M5yhJ2-P4OAa__Zpqi02kRi1X4WcFUIcW7hZ7ter_EGuvPJIhxc2YBIWMI-udcozkkQnBUvl_ndlhNgTUWV2XRNcDgkIjlY80qQduFrp3CjOdGuZUQdEI0FPHQOjO7cpwmgFAOzy2550szU-176jjdZA0ipeCpkQsULji_UOsZvGejmk8NUJmrgHWV80uC_0jBgn9HdjwTKLqm1JCOpCvNiGIXA0__6hF9C5rdMErHpTlYzXuoDm3jw","expirationTimestamp":"2022-06-23T09:38:35Z"}}
Jun 23 09:28:55.036: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"d2ce0ae0-f2d6-11ec-ae4c-5a93c262912a","target_path":"/var/lib/kubelet/pods/71754c17-6fd5-496f-b349-1211e2b25d65/volumes/kubernetes.io~csi/pvc-d31c46cc-6e8f-4a75-9fb5-d36a3ec5c104/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-j7mts
Jun 23 09:28:55.036: INFO: Deleting pod "pvc-volume-tester-j7mts" in namespace "csi-mock-volumes-5000"
STEP: Deleting claim pvc-pm5rs
Jun 23 09:28:55.179: INFO: Waiting up to 2m0s for PersistentVolume pvc-d31c46cc-6e8f-4a75-9fb5-d36a3ec5c104 to get deleted
Jun 23 09:28:55.235: INFO: PersistentVolume pvc-d31c46cc-6e8f-4a75-9fb5-d36a3ec5c104 found and phase=Released (55.80134ms)
Jun 23 09:28:57.280: INFO: PersistentVolume pvc-d31c46cc-6e8f-4a75-9fb5-d36a3ec5c104 was removed
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  test/e2e/storage/csi_mock_volume.go:1574
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    test/e2e/storage/csi_mock_volume.go:1602
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":3,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Jun 23 09:28:58.802: INFO: PersistentVolumeClaim pvc-n4rpf found but phase is Pending instead of Bound.
Jun 23 09:29:00.847: INFO: PersistentVolumeClaim pvc-n4rpf found and phase=Bound (10.286330778s)
Jun 23 09:29:00.847: INFO: Waiting up to 3m0s for PersistentVolume local-c7vpw to have phase Bound
Jun 23 09:29:00.892: INFO: PersistentVolume local-c7vpw found and phase=Bound (44.627189ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t7dp
STEP: Creating a pod to test subpath
Jun 23 09:29:01.049: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t7dp" in namespace "provisioning-8444" to be "Succeeded or Failed"
Jun 23 09:29:01.097: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp": Phase="Pending", Reason="", readiness=false. Elapsed: 47.683768ms
Jun 23 09:29:03.143: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093550093s
Jun 23 09:29:05.188: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139153574s
Jun 23 09:29:07.239: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189521401s
Jun 23 09:29:09.290: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240719825s
Jun 23 09:29:11.336: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.287296918s
STEP: Saw pod success
Jun 23 09:29:11.336: INFO: Pod "pod-subpath-test-preprovisionedpv-t7dp" satisfied condition "Succeeded or Failed"
Jun 23 09:29:11.382: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-t7dp container test-container-subpath-preprovisionedpv-t7dp: <nil>
STEP: delete the pod
Jun 23 09:29:11.481: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t7dp to disappear
Jun 23 09:29:11.525: INFO: Pod pod-subpath-test-preprovisionedpv-t7dp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t7dp
Jun 23 09:29:11.525: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t7dp" in namespace "provisioning-8444"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 58 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:380
    should contain last line of the log
    test/e2e/kubectl/kubectl.go:624
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":10,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:12.883: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 39 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7441" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:13.399: INFO: Only supported for providers [aws] (not gce)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1720
------------------------------
... skipping 123 lines ...
• [SLOW TEST:17.093 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:14.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-180" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":7,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:14.955: INFO: Only supported for providers [vsphere] (not gce)
... skipping 64 lines ...
• [SLOW TEST:86.442 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  test/e2e/common/node/container_probe.go:327
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:108
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:29:10.366: INFO: Waiting up to 5m0s for pod "metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980" in namespace "projected-3006" to be "Succeeded or Failed"
Jun 23 09:29:10.410: INFO: Pod "metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980": Phase="Pending", Reason="", readiness=false. Elapsed: 43.718076ms
Jun 23 09:29:12.455: INFO: Pod "metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08942209s
Jun 23 09:29:14.500: INFO: Pod "metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133978129s
Jun 23 09:29:16.551: INFO: Pod "metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18530084s
STEP: Saw pod success
Jun 23 09:29:16.551: INFO: Pod "metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980" satisfied condition "Succeeded or Failed"
Jun 23 09:29:16.598: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980 container client-container: <nil>
STEP: delete the pod
Jun 23 09:29:16.708: INFO: Waiting for pod metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980 to disappear
Jun 23 09:29:16.756: INFO: Pod metadata-volume-9a5cb0a8-67e6-4efc-99c8-7a0649628980 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.865 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Conntrack
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 107 lines ...
• [SLOW TEST:20.143 seconds]
[sig-node] crictl
test/e2e/node/framework.go:23
  should be able to run crictl on the node
  test/e2e/node/crictl.go:40
------------------------------
{"msg":"PASSED [sig-node] crictl should be able to run crictl on the node","total":-1,"completed":4,"skipped":65,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:14.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:111
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 09:29:15.368: INFO: Waiting up to 5m0s for pod "security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690" in namespace "security-context-9581" to be "Succeeded or Failed"
Jun 23 09:29:15.416: INFO: Pod "security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690": Phase="Pending", Reason="", readiness=false. Elapsed: 48.721233ms
Jun 23 09:29:17.467: INFO: Pod "security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099505247s
Jun 23 09:29:19.559: INFO: Pod "security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191134505s
Jun 23 09:29:21.606: INFO: Pod "security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237906503s
STEP: Saw pod success
Jun 23 09:29:21.606: INFO: Pod "security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690" satisfied condition "Succeeded or Failed"
Jun 23 09:29:21.649: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690 container test-container: <nil>
STEP: delete the pod
Jun 23 09:29:21.776: INFO: Waiting for pod security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690 to disappear
Jun 23 09:29:21.820: INFO: Pod security-context-8a7b578b-c9cf-477c-8be4-5abb52c0e690 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
... skipping 164 lines ...
• [SLOW TEST:10.601 seconds]
[sig-apps] StatefulSet
test/e2e/apps/framework.go:23
  MinReadySeconds should be honored when enabled
  test/e2e/apps/statefulset.go:1152
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet MinReadySeconds should be honored when enabled","total":-1,"completed":11,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:23.525: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:24.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8200" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":12,"skipped":77,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:24.334: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 209 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:30.590: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 101 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:14.446 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:31.344: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 290 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should store data
      test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":54,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 105 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 84 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":66,"failed":0}
[BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:21.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:12.635 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:34.587: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:46.712 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should not create pods when created in suspend state
  test/e2e/apps/job.go:103
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":6,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:34.803: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":2,"skipped":32,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:22.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:12.889 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:34.945: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 132 lines ...
• [SLOW TEST:26.448 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:35.997: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:19.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:16.604 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  test/e2e/apps/job.go:81
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":4,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:34.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  test/e2e/node/security_context.go:171
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 09:29:35.321: INFO: Waiting up to 5m0s for pod "security-context-665ae175-b907-4e38-979e-a638496031d0" in namespace "security-context-3936" to be "Succeeded or Failed"
Jun 23 09:29:35.363: INFO: Pod "security-context-665ae175-b907-4e38-979e-a638496031d0": Phase="Pending", Reason="", readiness=false. Elapsed: 41.884248ms
Jun 23 09:29:37.411: INFO: Pod "security-context-665ae175-b907-4e38-979e-a638496031d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09068131s
Jun 23 09:29:39.462: INFO: Pod "security-context-665ae175-b907-4e38-979e-a638496031d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141483311s
Jun 23 09:29:41.505: INFO: Pod "security-context-665ae175-b907-4e38-979e-a638496031d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184172831s
STEP: Saw pod success
Jun 23 09:29:41.505: INFO: Pod "security-context-665ae175-b907-4e38-979e-a638496031d0" satisfied condition "Succeeded or Failed"
Jun 23 09:29:41.550: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod security-context-665ae175-b907-4e38-979e-a638496031d0 container test-container: <nil>
STEP: delete the pod
Jun 23 09:29:41.662: INFO: Waiting for pod security-context-665ae175-b907-4e38-979e-a638496031d0 to disappear
Jun 23 09:29:41.705: INFO: Pod security-context-665ae175-b907-4e38-979e-a638496031d0 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
... skipping 16 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:51
[It] volume on tmpfs should have the correct mode using FSGroup
  test/e2e/common/storage/empty_dir.go:76
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 23 09:29:31.868: INFO: Waiting up to 5m0s for pod "pod-58777436-1a53-4783-ad29-5ee3da94442a" in namespace "emptydir-5999" to be "Succeeded or Failed"
Jun 23 09:29:31.913: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.59656ms
Jun 23 09:29:33.964: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096055355s
Jun 23 09:29:36.010: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141483789s
Jun 23 09:29:38.064: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195689496s
Jun 23 09:29:40.113: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244481989s
Jun 23 09:29:42.160: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.291949395s
STEP: Saw pod success
Jun 23 09:29:42.160: INFO: Pod "pod-58777436-1a53-4783-ad29-5ee3da94442a" satisfied condition "Succeeded or Failed"
Jun 23 09:29:42.205: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-58777436-1a53-4783-ad29-5ee3da94442a container test-container: <nil>
STEP: delete the pod
Jun 23 09:29:42.309: INFO: Waiting for pod pod-58777436-1a53-4783-ad29-5ee3da94442a to disappear
Jun 23 09:29:42.377: INFO: Pod pod-58777436-1a53-4783-ad29-5ee3da94442a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 39 lines ...
• [SLOW TEST:81.242 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:42.550: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 190 lines ...
Jun 23 09:29:15.694: INFO: PersistentVolumeClaim pvc-xmdk5 found but phase is Pending instead of Bound.
Jun 23 09:29:17.740: INFO: PersistentVolumeClaim pvc-xmdk5 found and phase=Bound (16.399942092s)
Jun 23 09:29:17.740: INFO: Waiting up to 3m0s for PersistentVolume local-zzlcs to have phase Bound
Jun 23 09:29:17.785: INFO: PersistentVolume local-zzlcs found and phase=Bound (44.192716ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-l58x
STEP: Creating a pod to test subpath
Jun 23 09:29:17.953: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l58x" in namespace "provisioning-2533" to be "Succeeded or Failed"
Jun 23 09:29:18.018: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 64.863763ms
Jun 23 09:29:20.085: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130982649s
Jun 23 09:29:22.137: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183504269s
Jun 23 09:29:24.182: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228095038s
Jun 23 09:29:26.234: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280781077s
Jun 23 09:29:28.281: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.327353761s
STEP: Saw pod success
Jun 23 09:29:28.281: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x" satisfied condition "Succeeded or Failed"
Jun 23 09:29:28.336: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-subpath-test-preprovisionedpv-l58x container test-container-subpath-preprovisionedpv-l58x: <nil>
STEP: delete the pod
Jun 23 09:29:28.445: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l58x to disappear
Jun 23 09:29:28.490: INFO: Pod pod-subpath-test-preprovisionedpv-l58x no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l58x
Jun 23 09:29:28.490: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l58x" in namespace "provisioning-2533"
STEP: Creating pod pod-subpath-test-preprovisionedpv-l58x
STEP: Creating a pod to test subpath
Jun 23 09:29:28.649: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l58x" in namespace "provisioning-2533" to be "Succeeded or Failed"
Jun 23 09:29:28.702: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 53.229264ms
Jun 23 09:29:30.865: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216557115s
Jun 23 09:29:32.912: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26311049s
Jun 23 09:29:34.958: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309075955s
Jun 23 09:29:37.006: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.357099518s
Jun 23 09:29:39.055: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406206135s
Jun 23 09:29:41.102: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.453653019s
STEP: Saw pod success
Jun 23 09:29:41.103: INFO: Pod "pod-subpath-test-preprovisionedpv-l58x" satisfied condition "Succeeded or Failed"
Jun 23 09:29:41.147: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-subpath-test-preprovisionedpv-l58x container test-container-subpath-preprovisionedpv-l58x: <nil>
STEP: delete the pod
Jun 23 09:29:41.258: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l58x to disappear
Jun 23 09:29:41.324: INFO: Pod pod-subpath-test-preprovisionedpv-l58x no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l58x
Jun 23 09:29:41.324: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l58x" in namespace "provisioning-2533"
... skipping 49 lines ...
[It] should support non-existent path
  test/e2e/storage/testsuites/subpath.go:196
Jun 23 09:29:36.634: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 09:29:36.634: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4wn7
STEP: Creating a pod to test subpath
Jun 23 09:29:36.685: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4wn7" in namespace "provisioning-4529" to be "Succeeded or Failed"
Jun 23 09:29:36.728: INFO: Pod "pod-subpath-test-inlinevolume-4wn7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.037347ms
Jun 23 09:29:38.775: INFO: Pod "pod-subpath-test-inlinevolume-4wn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089624345s
Jun 23 09:29:40.832: INFO: Pod "pod-subpath-test-inlinevolume-4wn7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147237063s
Jun 23 09:29:42.876: INFO: Pod "pod-subpath-test-inlinevolume-4wn7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190739746s
STEP: Saw pod success
Jun 23 09:29:42.876: INFO: Pod "pod-subpath-test-inlinevolume-4wn7" satisfied condition "Succeeded or Failed"
Jun 23 09:29:42.923: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-inlinevolume-4wn7 container test-container-volume-inlinevolume-4wn7: <nil>
STEP: delete the pod
Jun 23 09:29:43.063: INFO: Waiting for pod pod-subpath-test-inlinevolume-4wn7 to disappear
Jun 23 09:29:43.110: INFO: Pod pod-subpath-test-inlinevolume-4wn7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4wn7
Jun 23 09:29:43.110: INFO: Deleting pod "pod-subpath-test-inlinevolume-4wn7" in namespace "provisioning-4529"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:43.306: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:188

... skipping 65 lines ...
Jun 23 09:29:30.569: INFO: PersistentVolumeClaim pvc-2j7qw found but phase is Pending instead of Bound.
Jun 23 09:29:32.615: INFO: PersistentVolumeClaim pvc-2j7qw found and phase=Bound (14.442604567s)
Jun 23 09:29:32.615: INFO: Waiting up to 3m0s for PersistentVolume local-7lzd7 to have phase Bound
Jun 23 09:29:32.658: INFO: PersistentVolume local-7lzd7 found and phase=Bound (43.05167ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-x7pd
STEP: Creating a pod to test subpath
Jun 23 09:29:32.805: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-x7pd" in namespace "provisioning-8843" to be "Succeeded or Failed"
Jun 23 09:29:32.848: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd": Phase="Pending", Reason="", readiness=false. Elapsed: 43.558019ms
Jun 23 09:29:34.892: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086902297s
Jun 23 09:29:36.936: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131487853s
Jun 23 09:29:38.982: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177222807s
Jun 23 09:29:41.029: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22397573s
Jun 23 09:29:43.084: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.27922832s
STEP: Saw pod success
Jun 23 09:29:43.084: INFO: Pod "pod-subpath-test-preprovisionedpv-x7pd" satisfied condition "Succeeded or Failed"
Jun 23 09:29:43.129: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-x7pd container test-container-subpath-preprovisionedpv-x7pd: <nil>
STEP: delete the pod
Jun 23 09:29:43.230: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-x7pd to disappear
Jun 23 09:29:43.276: INFO: Pod pod-subpath-test-preprovisionedpv-x7pd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-x7pd
Jun 23 09:29:43.276: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-x7pd" in namespace "provisioning-8843"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":56,"failed":0}

S
------------------------------
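[Editor's note] The PersistentVolumeClaim lines above ("found but phase is Pending instead of Bound", then "found and phase=Bound") follow the same polling shape. A minimal sketch under the same caveats (illustrative only, invented names, standard client-go calls):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound keeps polling while the claim is missing or Pending and
// succeeds once its phase is Bound, as in the log lines above.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // claim not created yet; keep waiting
		}
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

------------------------------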
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:13.385 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:44.555: INFO: Only supported for providers [vsphere] (not gce)
... skipping 49 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:45.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1540" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}

SS
------------------------------
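[Editor's note] Each AfterEach above logs "Waiting up to 3m0s for all (but 0) nodes to be ready". A hedged sketch of that readiness check (illustrative only, not the framework helper; it tests the NodeReady condition on every node):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allNodesReady reports whether every node currently has the NodeReady
// condition set to True; a wait loop would poll this until the timeout.
func allNodesReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

------------------------------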
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:45.189: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 64 lines ...
• [SLOW TEST:72.806 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should observe that the PodDisruptionBudget status is not updated for unmanaged pods
  test/e2e/apps/disruption.go:194
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:47.006: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  test/e2e/node/security_context.go:178
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 09:29:35.579: INFO: Waiting up to 5m0s for pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97" in namespace "security-context-7546" to be "Succeeded or Failed"
Jun 23 09:29:35.622: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 42.230779ms
Jun 23 09:29:37.676: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09626425s
Jun 23 09:29:39.722: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142420815s
Jun 23 09:29:41.772: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192900911s
Jun 23 09:29:43.828: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.248318172s
Jun 23 09:29:45.873: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293964998s
Jun 23 09:29:47.929: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Pending", Reason="", readiness=false. Elapsed: 12.349665423s
Jun 23 09:29:49.974: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.394475361s
STEP: Saw pod success
Jun 23 09:29:49.974: INFO: Pod "security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97" satisfied condition "Succeeded or Failed"
Jun 23 09:29:50.018: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97 container test-container: <nil>
STEP: delete the pod
Jun 23 09:29:50.115: INFO: Waiting for pod security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97 to disappear
Jun 23 09:29:50.158: INFO: Pod security-context-1c1581e1-6f31-486f-a10f-34a5bb6aae97 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:15.033 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  test/e2e/node/security_context.go:178
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":7,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:50.295: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 123 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should store data
      test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating secret secrets-9639/secret-test-820ba0b9-7e2c-4458-9174-3b88021094aa
STEP: Creating a pod to test consume secrets
Jun 23 09:29:45.607: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244" in namespace "secrets-9639" to be "Succeeded or Failed"
Jun 23 09:29:45.652: INFO: Pod "pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244": Phase="Pending", Reason="", readiness=false. Elapsed: 44.361743ms
Jun 23 09:29:47.697: INFO: Pod "pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08963597s
Jun 23 09:29:49.742: INFO: Pod "pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134594449s
Jun 23 09:29:51.786: INFO: Pod "pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.178873704s
STEP: Saw pod success
Jun 23 09:29:51.787: INFO: Pod "pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244" satisfied condition "Succeeded or Failed"
Jun 23 09:29:51.830: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244 container env-test: <nil>
STEP: delete the pod
Jun 23 09:29:51.940: INFO: Waiting for pod pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244 to disappear
Jun 23 09:29:51.987: INFO: Pod pod-configmaps-c5d62d96-2aa2-4499-9090-652cc2916244 no longer exists
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.878 seconds]
[sig-node] Secrets
test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:52.100: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:188

... skipping 68 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name projected-secret-test-4f627b55-b0db-448d-90c7-12e80070c03f
STEP: Creating a pod to test consume secrets
Jun 23 09:29:43.761: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb" in namespace "projected-1135" to be "Succeeded or Failed"
Jun 23 09:29:43.808: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.919953ms
Jun 23 09:29:45.858: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096981768s
Jun 23 09:29:47.913: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151570119s
Jun 23 09:29:49.957: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196498518s
Jun 23 09:29:52.005: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244279646s
Jun 23 09:29:54.063: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302404706s
STEP: Saw pod success
Jun 23 09:29:54.063: INFO: Pod "pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb" satisfied condition "Succeeded or Failed"
Jun 23 09:29:54.108: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 09:29:54.209: INFO: Waiting for pod pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb to disappear
Jun 23 09:29:54.252: INFO: Pod pod-projected-secrets-78b4567e-bf30-45b3-a927-5efc4cc62fdb no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.014 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] PodOSRejection [NodeConformance]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 7 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:29:54.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-os-rejection-6949" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS","total":-1,"completed":7,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:5.062 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:57.079: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:188

... skipping 42 lines ...
Jun 23 09:29:44.720: INFO: PersistentVolumeClaim pvc-vx6gg found but phase is Pending instead of Bound.
Jun 23 09:29:46.767: INFO: PersistentVolumeClaim pvc-vx6gg found and phase=Bound (10.292121739s)
Jun 23 09:29:46.767: INFO: Waiting up to 3m0s for PersistentVolume local-4vzmb to have phase Bound
Jun 23 09:29:46.812: INFO: PersistentVolume local-4vzmb found and phase=Bound (45.112001ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-52l6
STEP: Creating a pod to test subpath
Jun 23 09:29:46.964: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-52l6" in namespace "provisioning-8868" to be "Succeeded or Failed"
Jun 23 09:29:47.012: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.799596ms
Jun 23 09:29:49.061: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096991883s
Jun 23 09:29:51.108: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14469852s
Jun 23 09:29:53.158: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194363958s
STEP: Saw pod success
Jun 23 09:29:53.158: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6" satisfied condition "Succeeded or Failed"
Jun 23 09:29:53.264: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-52l6 container test-container-subpath-preprovisionedpv-52l6: <nil>
STEP: delete the pod
Jun 23 09:29:53.410: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-52l6 to disappear
Jun 23 09:29:53.456: INFO: Pod pod-subpath-test-preprovisionedpv-52l6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-52l6
Jun 23 09:29:53.456: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-52l6" in namespace "provisioning-8868"
STEP: Creating pod pod-subpath-test-preprovisionedpv-52l6
STEP: Creating a pod to test subpath
Jun 23 09:29:53.563: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-52l6" in namespace "provisioning-8868" to be "Succeeded or Failed"
Jun 23 09:29:53.634: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Pending", Reason="", readiness=false. Elapsed: 71.021017ms
Jun 23 09:29:55.690: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12757687s
Jun 23 09:29:57.738: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175146591s
STEP: Saw pod success
Jun 23 09:29:57.738: INFO: Pod "pod-subpath-test-preprovisionedpv-52l6" satisfied condition "Succeeded or Failed"
Jun 23 09:29:57.785: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-52l6 container test-container-subpath-preprovisionedpv-52l6: <nil>
STEP: delete the pod
Jun 23 09:29:57.885: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-52l6 to disappear
Jun 23 09:29:57.935: INFO: Pod pod-subpath-test-preprovisionedpv-52l6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-52l6
Jun 23 09:29:57.935: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-52l6" in namespace "provisioning-8868"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":64,"failed":0}

SSS
------------------------------
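[Editor's note] The interleaved one-line JSON records (the {"msg":"PASSED ...","total":-1,...} lines) are convenient for post-processing this log. A minimal Go sketch matching only the keys visible here; the type and function names are invented, and the field comments describe only what this particular run shows:

package sketch

import "encoding/json"

// specResult mirrors the keys of the one-line JSON records in this log.
type specResult struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`     // -1 on every line in this run
	Completed int    `json:"completed"` // appears to count specs finished so far on one parallel worker
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

// parseSpecResult decodes a single JSON record line.
func parseSpecResult(line string) (specResult, error) {
	var r specResult
	err := json.Unmarshal([]byte(line), &r)
	return r, err
}

------------------------------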
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:29:58.662: INFO: Only supported for providers [vsphere] (not gce)
... skipping 37 lines ...
      Only supported for providers [openstack] (not gce)

      test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:43.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  test/e2e/kubectl/portforward.go:454
    should support forwarding over websockets
    test/e2e/kubectl/portforward.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 36 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:00.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7745" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":5,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:00.803: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 180 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should not deadlock when a pod's predecessor fails
    test/e2e/apps/statefulset.go:256
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 25 lines ...
Jun 23 09:29:43.849: INFO: PersistentVolumeClaim pvc-g9qw2 found but phase is Pending instead of Bound.
Jun 23 09:29:45.894: INFO: PersistentVolumeClaim pvc-g9qw2 found and phase=Bound (14.396037628s)
Jun 23 09:29:45.894: INFO: Waiting up to 3m0s for PersistentVolume local-97jm4 to have phase Bound
Jun 23 09:29:45.936: INFO: PersistentVolume local-97jm4 found and phase=Bound (42.132826ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n9x7
STEP: Creating a pod to test subpath
Jun 23 09:29:46.069: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n9x7" in namespace "provisioning-3312" to be "Succeeded or Failed"
Jun 23 09:29:46.113: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.81679ms
Jun 23 09:29:48.158: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088968962s
Jun 23 09:29:50.219: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149662701s
Jun 23 09:29:52.268: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19838736s
Jun 23 09:29:54.313: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243903092s
Jun 23 09:29:56.357: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.288105835s
Jun 23 09:29:58.403: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.333548052s
Jun 23 09:30:00.447: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.377729767s
STEP: Saw pod success
Jun 23 09:30:00.447: INFO: Pod "pod-subpath-test-preprovisionedpv-n9x7" satisfied condition "Succeeded or Failed"
Jun 23 09:30:00.490: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-n9x7 container test-container-volume-preprovisionedpv-n9x7: <nil>
STEP: delete the pod
Jun 23 09:30:00.605: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n9x7 to disappear
Jun 23 09:30:00.648: INFO: Pod pod-subpath-test-preprovisionedpv-n9x7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n9x7
Jun 23 09:30:00.648: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n9x7" in namespace "provisioning-3312"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":13,"skipped":100,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:01.875: INFO: Only supported for providers [vsphere] (not gce)
... skipping 56 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute prestop http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:02.148: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 109 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:02.595: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:41.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 49 lines ...
  test/e2e/kubectl/kubectl.go:380
    should return command exit codes
    test/e2e/kubectl/kubectl.go:500
      execing into a container with a failing command
      test/e2e/kubectl/kubectl.go:506
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:02.621: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:188

... skipping 89 lines ...
• [SLOW TEST:23.873 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeConformance]
  test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","total":-1,"completed":5,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:07.882: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:188

... skipping 194 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":106,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:10.485 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:11.577: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 33 lines ...
      test/e2e/storage/testsuites/volume_expand.go:176

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":5,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:42.903: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
Jun 23 09:30:00.211: INFO: PersistentVolumeClaim pvc-lmp6m found but phase is Pending instead of Bound.
Jun 23 09:30:02.258: INFO: PersistentVolumeClaim pvc-lmp6m found and phase=Bound (14.373814127s)
Jun 23 09:30:02.259: INFO: Waiting up to 3m0s for PersistentVolume local-nrr4n to have phase Bound
Jun 23 09:30:02.319: INFO: PersistentVolume local-nrr4n found and phase=Bound (60.041006ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-stng
STEP: Creating a pod to test subpath
Jun 23 09:30:02.492: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-stng" in namespace "provisioning-6712" to be "Succeeded or Failed"
Jun 23 09:30:02.538: INFO: Pod "pod-subpath-test-preprovisionedpv-stng": Phase="Pending", Reason="", readiness=false. Elapsed: 46.165437ms
Jun 23 09:30:04.589: INFO: Pod "pod-subpath-test-preprovisionedpv-stng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097580213s
Jun 23 09:30:06.637: INFO: Pod "pod-subpath-test-preprovisionedpv-stng": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144841318s
Jun 23 09:30:08.693: INFO: Pod "pod-subpath-test-preprovisionedpv-stng": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201261492s
Jun 23 09:30:10.763: INFO: Pod "pod-subpath-test-preprovisionedpv-stng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.271513765s
STEP: Saw pod success
Jun 23 09:30:10.763: INFO: Pod "pod-subpath-test-preprovisionedpv-stng" satisfied condition "Succeeded or Failed"
Jun 23 09:30:10.841: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-subpath-test-preprovisionedpv-stng container test-container-volume-preprovisionedpv-stng: <nil>
STEP: delete the pod
Jun 23 09:30:11.036: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-stng to disappear
Jun 23 09:30:11.158: INFO: Pod pod-subpath-test-preprovisionedpv-stng no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-stng
Jun 23 09:30:11.158: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-stng" in namespace "provisioning-6712"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:12.928: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 23 09:30:02.596: INFO: Waiting up to 5m0s for pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a" in namespace "emptydir-6723" to be "Succeeded or Failed"
Jun 23 09:30:02.650: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.870801ms
Jun 23 09:30:04.697: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101084513s
Jun 23 09:30:06.744: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148009344s
Jun 23 09:30:08.791: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195453165s
Jun 23 09:30:10.892: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296049628s
Jun 23 09:30:12.941: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.344863769s
Jun 23 09:30:14.989: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.393214843s
STEP: Saw pod success
Jun 23 09:30:14.989: INFO: Pod "pod-ba6816c3-331d-4ee6-89ec-b0b20413292a" satisfied condition "Succeeded or Failed"
Jun 23 09:30:15.039: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-ba6816c3-331d-4ee6-89ec-b0b20413292a container test-container: <nil>
STEP: delete the pod
Jun 23 09:30:15.153: INFO: Waiting for pod pod-ba6816c3-331d-4ee6-89ec-b0b20413292a to disappear
Jun 23 09:30:15.203: INFO: Pod pod-ba6816c3-331d-4ee6-89ec-b0b20413292a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:13.181 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:15.386: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
• [SLOW TEST:132.812 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  test/e2e/apps/cronjob.go:280
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:15.723: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 258 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:16.047: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:30:13.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c" in namespace "downward-api-8334" to be "Succeeded or Failed"
Jun 23 09:30:13.371: INFO: Pod "downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.684591ms
Jun 23 09:30:15.435: INFO: Pod "downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110763804s
Jun 23 09:30:17.482: INFO: Pod "downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157116156s
STEP: Saw pod success
Jun 23 09:30:17.482: INFO: Pod "downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c" satisfied condition "Succeeded or Failed"
Jun 23 09:30:17.527: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c container client-container: <nil>
STEP: delete the pod
Jun 23 09:30:17.628: INFO: Waiting for pod downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c to disappear
Jun 23 09:30:17.674: INFO: Pod downwardapi-volume-3caee7a1-1a7c-4e95-9305-9993d00a3b7c no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
... skipping 28 lines ...
• [SLOW TEST:16.989 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":6,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:17.886: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:188

... skipping 28 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:18.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8369" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:18.600: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:188

... skipping 70 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:380
    should support exec using resource/name
    test/e2e/kubectl/kubectl.go:432
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":6,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Probing container
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:68.329 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  test/e2e/common/node/container_probe.go:266
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":7,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:21.891: INFO: Only supported for providers [azure] (not gce)
... skipping 162 lines ...
      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1438
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":57,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:22.020: INFO: Only supported for providers [azure] (not gce)
... skipping 103 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4866
STEP: Waiting until pod test-pod will start running in namespace statefulset-4866
STEP: Creating statefulset with conflicting port in namespace statefulset-4866
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4866
Jun 23 09:29:43.340: INFO: Observed stateful pod in namespace: statefulset-4866, name: ss-0, uid: 7684ad8f-5005-4c14-82c8-18767e9b68a9, status phase: Pending. Waiting for statefulset controller to delete.
Jun 23 09:29:54.860: INFO: Observed stateful pod in namespace: statefulset-4866, name: ss-0, uid: 7684ad8f-5005-4c14-82c8-18767e9b68a9, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 09:29:54.870: INFO: Observed stateful pod in namespace: statefulset-4866, name: ss-0, uid: 7684ad8f-5005-4c14-82c8-18767e9b68a9, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 09:29:54.879: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4866
STEP: Removing pod with conflicting port in namespace statefulset-4866
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4866 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:122
Jun 23 09:30:13.448: INFO: Deleting all statefulset in ns statefulset-4866
... skipping 11 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    Should recreate evicted statefulset [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":10,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:24.008: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 21 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:30:19.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2" in namespace "downward-api-4650" to be "Succeeded or Failed"
Jun 23 09:30:19.883: INFO: Pod "downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2": Phase="Pending", Reason="", readiness=false. Elapsed: 47.363447ms
Jun 23 09:30:21.929: INFO: Pod "downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093221285s
Jun 23 09:30:23.997: INFO: Pod "downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161123124s
Jun 23 09:30:26.045: INFO: Pod "downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209335952s
STEP: Saw pod success
Jun 23 09:30:26.045: INFO: Pod "downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2" satisfied condition "Succeeded or Failed"
Jun 23 09:30:26.098: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2 container client-container: <nil>
STEP: delete the pod
Jun 23 09:30:26.214: INFO: Waiting for pod downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2 to disappear
Jun 23 09:30:26.261: INFO: Pod downwardapi-volume-015aea3d-d8a6-41b0-8f4c-a6f3421493f2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.912 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:26.402: INFO: Only supported for providers [azure] (not gce)
... skipping 122 lines ...
test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  test/e2e/storage/csi_mock_volume.go:1334
    CSIStorageCapacity used, no capacity
    test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:27.073: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:188

... skipping 59 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:27.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3916" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":8,"skipped":58,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:27.227: INFO: Only supported for providers [azure] (not gce)
... skipping 45 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:28.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "csistoragecapacity-352" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] CSIStorageCapacity  should support CSIStorageCapacities API operations [Conformance]","total":-1,"completed":9,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:28.629: INFO: Driver "local" does not provide raw block - skipping
... skipping 12 lines ...
      test/e2e/storage/testsuites/volumes.go:161

      Driver "local" does not provide raw block - skipping

      test/e2e/storage/testsuites/volumes.go:114
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:30:00.821: INFO: >>> kubeConfig: /root/.kube/config
... skipping 64 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:28.916: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/framework/framework.go:188

... skipping 180 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 23 lines ...
• [SLOW TEST:11.830 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "services-8149" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":8,"skipped":88,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:30.461: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-af7d2fdd-b6fc-4cb6-87a7-a74937e99742
STEP: Creating a pod to test consume secrets
Jun 23 09:30:24.427: INFO: Waiting up to 5m0s for pod "pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50" in namespace "secrets-863" to be "Succeeded or Failed"
Jun 23 09:30:24.472: INFO: Pod "pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50": Phase="Pending", Reason="", readiness=false. Elapsed: 44.223103ms
Jun 23 09:30:26.516: INFO: Pod "pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088747635s
Jun 23 09:30:28.561: INFO: Pod "pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133430305s
Jun 23 09:30:30.605: INFO: Pod "pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177699714s
STEP: Saw pod success
Jun 23 09:30:30.605: INFO: Pod "pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50" satisfied condition "Succeeded or Failed"
Jun 23 09:30:30.655: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 09:30:30.754: INFO: Waiting for pod pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50 to disappear
Jun 23 09:30:30.803: INFO: Pod pod-secrets-5ff69cd8-bfcf-4d23-968b-1ffed09b8e50 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.880 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:30.912: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/framework/framework.go:188

... skipping 32 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:31.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7479" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":12,"skipped":93,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:31.670: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 62 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:30:29.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918" in namespace "projected-2686" to be "Succeeded or Failed"
Jun 23 09:30:29.535: INFO: Pod "downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918": Phase="Pending", Reason="", readiness=false. Elapsed: 45.347958ms
Jun 23 09:30:31.582: INFO: Pod "downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092875275s
Jun 23 09:30:33.633: INFO: Pod "downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14333944s
STEP: Saw pod success
Jun 23 09:30:33.633: INFO: Pod "downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918" satisfied condition "Succeeded or Failed"
Jun 23 09:30:33.680: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918 container client-container: <nil>
STEP: delete the pod
Jun 23 09:30:33.839: INFO: Waiting for pod downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918 to disappear
Jun 23 09:30:33.887: INFO: Pod downwardapi-volume-d523eb08-996f-4efd-927f-87ea4e7db918 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:188
Jun 23 09:30:33.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2686" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 136 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    test/e2e/storage/framework/testsuite.go:50
      should mount multiple PV pointing to the same storage on the same node
      test/e2e/storage/testsuites/provisioning.go:518
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node","total":-1,"completed":6,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:34.289: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 120 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl label
  test/e2e/kubectl/kubectl.go:1332
    should update the label on a resource  [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":10,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:34.750: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:188

... skipping 96 lines ...
• [SLOW TEST:6.714 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":13,"skipped":99,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":9,"skipped":98,"failed":0}
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:30:31.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 23 09:30:32.320: INFO: Waiting up to 5m0s for pod "downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50" in namespace "downward-api-3041" to be "Succeeded or Failed"
Jun 23 09:30:32.409: INFO: Pod "downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 88.745942ms
Jun 23 09:30:34.495: INFO: Pod "downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175026145s
Jun 23 09:30:36.543: INFO: Pod "downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222784617s
Jun 23 09:30:38.592: INFO: Pod "downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.271867261s
STEP: Saw pod success
Jun 23 09:30:38.592: INFO: Pod "downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50" satisfied condition "Succeeded or Failed"
Jun 23 09:30:38.642: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50 container dapi-container: <nil>
STEP: delete the pod
Jun 23 09:30:38.759: INFO: Waiting for pod downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50 to disappear
Jun 23 09:30:38.811: INFO: Pod downward-api-15b87e61-963a-4881-9ac3-4ce2377e6a50 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:7.056 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:39.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3871" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:39.766: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
Jun 23 09:30:30.852: INFO: PersistentVolumeClaim pvc-52lkk found but phase is Pending instead of Bound.
Jun 23 09:30:32.909: INFO: PersistentVolumeClaim pvc-52lkk found and phase=Bound (14.403681762s)
Jun 23 09:30:32.909: INFO: Waiting up to 3m0s for PersistentVolume local-fdrbw to have phase Bound
Jun 23 09:30:32.956: INFO: PersistentVolume local-fdrbw found and phase=Bound (47.202712ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-87bt
STEP: Creating a pod to test subpath
Jun 23 09:30:33.101: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-87bt" in namespace "provisioning-504" to be "Succeeded or Failed"
Jun 23 09:30:33.150: INFO: Pod "pod-subpath-test-preprovisionedpv-87bt": Phase="Pending", Reason="", readiness=false. Elapsed: 49.127422ms
Jun 23 09:30:35.201: INFO: Pod "pod-subpath-test-preprovisionedpv-87bt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099558389s
Jun 23 09:30:37.249: INFO: Pod "pod-subpath-test-preprovisionedpv-87bt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147372866s
Jun 23 09:30:39.296: INFO: Pod "pod-subpath-test-preprovisionedpv-87bt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194680532s
STEP: Saw pod success
Jun 23 09:30:39.296: INFO: Pod "pod-subpath-test-preprovisionedpv-87bt" satisfied condition "Succeeded or Failed"
Jun 23 09:30:39.343: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-87bt container test-container-subpath-preprovisionedpv-87bt: <nil>
STEP: delete the pod
Jun 23 09:30:39.460: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-87bt to disappear
Jun 23 09:30:39.506: INFO: Pod pod-subpath-test-preprovisionedpv-87bt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-87bt
Jun 23 09:30:39.506: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-87bt" in namespace "provisioning-504"
... skipping 81 lines ...
• [SLOW TEST:18.337 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":10,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:40.392: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:188

... skipping 142 lines ...
• [SLOW TEST:7.907 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:41.947: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 87 lines ...
Jun 23 09:30:12.809: INFO: Unable to read jessie_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:12.858: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:12.913: INFO: Unable to read jessie_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:12.979: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:13.035: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:13.082: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:13.284: INFO: Lookups using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9870 wheezy_tcp@dns-test-service.dns-9870 wheezy_udp@dns-test-service.dns-9870.svc wheezy_tcp@dns-test-service.dns-9870.svc wheezy_udp@_http._tcp.dns-test-service.dns-9870.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9870.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9870 jessie_tcp@dns-test-service.dns-9870 jessie_udp@dns-test-service.dns-9870.svc jessie_tcp@dns-test-service.dns-9870.svc jessie_udp@_http._tcp.dns-test-service.dns-9870.svc jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc]

Jun 23 09:30:18.343: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:18.405: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:18.459: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:18.523: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:18.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
... skipping 5 lines ...
Jun 23 09:30:19.059: INFO: Unable to read jessie_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:19.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:19.158: INFO: Unable to read jessie_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:19.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:19.264: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:19.309: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:19.499: INFO: Lookups using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9870 wheezy_tcp@dns-test-service.dns-9870 wheezy_udp@dns-test-service.dns-9870.svc wheezy_tcp@dns-test-service.dns-9870.svc wheezy_udp@_http._tcp.dns-test-service.dns-9870.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9870.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9870 jessie_tcp@dns-test-service.dns-9870 jessie_udp@dns-test-service.dns-9870.svc jessie_tcp@dns-test-service.dns-9870.svc jessie_udp@_http._tcp.dns-test-service.dns-9870.svc jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc]

Jun 23 09:30:23.334: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:23.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:23.427: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:23.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:23.520: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
... skipping 5 lines ...
Jun 23 09:30:24.036: INFO: Unable to read jessie_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:24.117: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:24.163: INFO: Unable to read jessie_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:24.211: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:24.260: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:24.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:24.497: INFO: Lookups using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9870 wheezy_tcp@dns-test-service.dns-9870 wheezy_udp@dns-test-service.dns-9870.svc wheezy_tcp@dns-test-service.dns-9870.svc wheezy_udp@_http._tcp.dns-test-service.dns-9870.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9870.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9870 jessie_tcp@dns-test-service.dns-9870 jessie_udp@dns-test-service.dns-9870.svc jessie_tcp@dns-test-service.dns-9870.svc jessie_udp@_http._tcp.dns-test-service.dns-9870.svc jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc]

Jun 23 09:30:28.333: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:28.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:28.423: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:28.469: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:28.514: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
... skipping 5 lines ...
Jun 23 09:30:28.981: INFO: Unable to read jessie_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:29.027: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:29.071: INFO: Unable to read jessie_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:29.116: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:29.161: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:29.209: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:29.397: INFO: Lookups using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9870 wheezy_tcp@dns-test-service.dns-9870 wheezy_udp@dns-test-service.dns-9870.svc wheezy_tcp@dns-test-service.dns-9870.svc wheezy_udp@_http._tcp.dns-test-service.dns-9870.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9870.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9870 jessie_tcp@dns-test-service.dns-9870 jessie_udp@dns-test-service.dns-9870.svc jessie_tcp@dns-test-service.dns-9870.svc jessie_udp@_http._tcp.dns-test-service.dns-9870.svc jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc]

Jun 23 09:30:33.330: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:33.373: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:33.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:33.470: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:33.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
... skipping 5 lines ...
Jun 23 09:30:34.076: INFO: Unable to read jessie_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:34.121: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:34.173: INFO: Unable to read jessie_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:34.229: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:34.275: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:34.337: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:34.622: INFO: Lookups using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9870 wheezy_tcp@dns-test-service.dns-9870 wheezy_udp@dns-test-service.dns-9870.svc wheezy_tcp@dns-test-service.dns-9870.svc wheezy_udp@_http._tcp.dns-test-service.dns-9870.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9870.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9870 jessie_tcp@dns-test-service.dns-9870 jessie_udp@dns-test-service.dns-9870.svc jessie_tcp@dns-test-service.dns-9870.svc jessie_udp@_http._tcp.dns-test-service.dns-9870.svc jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc]

Jun 23 09:30:38.331: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:38.376: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:38.422: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:38.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:38.513: INFO: Unable to read wheezy_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
... skipping 5 lines ...
Jun 23 09:30:38.975: INFO: Unable to read jessie_udp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:39.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870 from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:39.072: INFO: Unable to read jessie_udp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:39.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:39.164: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:39.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc from pod dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d: the server could not find the requested resource (get pods dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d)
Jun 23 09:30:39.398: INFO: Lookups using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9870 wheezy_tcp@dns-test-service.dns-9870 wheezy_udp@dns-test-service.dns-9870.svc wheezy_tcp@dns-test-service.dns-9870.svc wheezy_udp@_http._tcp.dns-test-service.dns-9870.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9870.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9870 jessie_tcp@dns-test-service.dns-9870 jessie_udp@dns-test-service.dns-9870.svc jessie_tcp@dns-test-service.dns-9870.svc jessie_udp@_http._tcp.dns-test-service.dns-9870.svc jessie_tcp@_http._tcp.dns-test-service.dns-9870.svc]

Jun 23 09:30:44.481: INFO: DNS probes using dns-9870/dns-test-7c4e656d-53b0-4a38-8617-746c23092d0d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:36.745 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":79,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:44.782: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 162 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 09:30:34.866: INFO: Waiting up to 5m0s for pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f" in namespace "security-context-6168" to be "Succeeded or Failed"
Jun 23 09:30:34.929: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f": Phase="Pending", Reason="", readiness=false. Elapsed: 63.21663ms
Jun 23 09:30:36.975: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108750173s
Jun 23 09:30:39.022: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f": Phase="Running", Reason="", readiness=false. Elapsed: 4.156340688s
Jun 23 09:30:41.069: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f": Phase="Running", Reason="", readiness=false. Elapsed: 6.202623665s
Jun 23 09:30:43.115: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f": Phase="Running", Reason="", readiness=false. Elapsed: 8.248966355s
Jun 23 09:30:45.162: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.296002146s
STEP: Saw pod success
Jun 23 09:30:45.162: INFO: Pod "security-context-54da11f9-2da9-4d3e-8997-48f400ef749f" satisfied condition "Succeeded or Failed"
Jun 23 09:30:45.209: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod security-context-54da11f9-2da9-4d3e-8997-48f400ef749f container test-container: <nil>
STEP: delete the pod
Jun 23 09:30:45.323: INFO: Waiting for pod security-context-54da11f9-2da9-4d3e-8997-48f400ef749f to disappear
Jun 23 09:30:45.381: INFO: Pod security-context-54da11f9-2da9-4d3e-8997-48f400ef749f no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.107 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:45.503: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 42 lines ...
Jun 23 09:30:15.362: INFO: PersistentVolumeClaim pvc-j26bp found but phase is Pending instead of Bound.
Jun 23 09:30:17.410: INFO: PersistentVolumeClaim pvc-j26bp found and phase=Bound (2.097336482s)
Jun 23 09:30:17.410: INFO: Waiting up to 3m0s for PersistentVolume local-c4pnn to have phase Bound
Jun 23 09:30:17.454: INFO: PersistentVolume local-c4pnn found and phase=Bound (44.140529ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-swb6
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 09:30:17.592: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-swb6" in namespace "provisioning-8718" to be "Succeeded or Failed"
Jun 23 09:30:17.642: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.578677ms
Jun 23 09:30:19.687: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095653498s
Jun 23 09:30:21.733: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.141506296s
Jun 23 09:30:23.780: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.188399984s
Jun 23 09:30:25.826: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.234082316s
Jun 23 09:30:27.870: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.278203379s
... skipping 3 lines ...
Jun 23 09:30:36.121: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.529182326s
Jun 23 09:30:38.168: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.575956289s
Jun 23 09:30:40.214: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=true. Elapsed: 22.622034841s
Jun 23 09:30:42.261: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Running", Reason="", readiness=false. Elapsed: 24.668803119s
Jun 23 09:30:44.310: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.717708857s
STEP: Saw pod success
Jun 23 09:30:44.310: INFO: Pod "pod-subpath-test-preprovisionedpv-swb6" satisfied condition "Succeeded or Failed"
Jun 23 09:30:44.368: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-swb6 container test-container-subpath-preprovisionedpv-swb6: <nil>
STEP: delete the pod
Jun 23 09:30:44.500: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-swb6 to disappear
Jun 23 09:30:44.557: INFO: Pod pod-subpath-test-preprovisionedpv-swb6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-swb6
Jun 23 09:30:44.557: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-swb6" in namespace "provisioning-8718"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:45.695: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 138 lines ...
Jun 23 09:30:28.997: INFO: PersistentVolumeClaim pvc-r8g8c found but phase is Pending instead of Bound.
Jun 23 09:30:31.049: INFO: PersistentVolumeClaim pvc-r8g8c found and phase=Bound (14.374069269s)
Jun 23 09:30:31.049: INFO: Waiting up to 3m0s for PersistentVolume local-jvcxv to have phase Bound
Jun 23 09:30:31.093: INFO: PersistentVolume local-jvcxv found and phase=Bound (43.153225ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qkvc
STEP: Creating a pod to test subpath
Jun 23 09:30:31.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qkvc" in namespace "provisioning-4597" to be "Succeeded or Failed"
Jun 23 09:30:31.272: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 43.483741ms
Jun 23 09:30:33.318: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089990337s
Jun 23 09:30:35.364: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135176462s
Jun 23 09:30:37.409: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180616285s
Jun 23 09:30:39.463: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.234807655s
STEP: Saw pod success
Jun 23 09:30:39.463: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc" satisfied condition "Succeeded or Failed"
Jun 23 09:30:39.508: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-qkvc container test-container-subpath-preprovisionedpv-qkvc: <nil>
STEP: delete the pod
Jun 23 09:30:39.624: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qkvc to disappear
Jun 23 09:30:39.670: INFO: Pod pod-subpath-test-preprovisionedpv-qkvc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qkvc
Jun 23 09:30:39.670: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qkvc" in namespace "provisioning-4597"
STEP: Creating pod pod-subpath-test-preprovisionedpv-qkvc
STEP: Creating a pod to test subpath
Jun 23 09:30:39.763: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qkvc" in namespace "provisioning-4597" to be "Succeeded or Failed"
Jun 23 09:30:39.806: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.653859ms
Jun 23 09:30:41.849: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0859524s
Jun 23 09:30:43.897: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133068532s
Jun 23 09:30:45.941: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177812796s
Jun 23 09:30:47.985: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.222011919s
STEP: Saw pod success
Jun 23 09:30:47.986: INFO: Pod "pod-subpath-test-preprovisionedpv-qkvc" satisfied condition "Succeeded or Failed"
Jun 23 09:30:48.029: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-qkvc container test-container-subpath-preprovisionedpv-qkvc: <nil>
STEP: delete the pod
Jun 23 09:30:48.126: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qkvc to disappear
Jun 23 09:30:48.173: INFO: Pod pod-subpath-test-preprovisionedpv-qkvc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qkvc
Jun 23 09:30:48.173: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qkvc" in namespace "provisioning-4597"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":15,"skipped":110,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:49.303: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating pod pod-subpath-test-configmap-fx9s
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 09:30:19.089: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fx9s" in namespace "subpath-8784" to be "Succeeded or Failed"
Jun 23 09:30:19.135: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Pending", Reason="", readiness=false. Elapsed: 45.785229ms
Jun 23 09:30:21.187: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097671368s
Jun 23 09:30:23.236: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146940512s
Jun 23 09:30:25.284: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 6.195404434s
Jun 23 09:30:27.332: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 8.24265102s
Jun 23 09:30:29.383: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 10.294124412s
... skipping 5 lines ...
Jun 23 09:30:41.682: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 22.592524807s
Jun 23 09:30:43.730: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 24.64071066s
Jun 23 09:30:45.782: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 26.69331626s
Jun 23 09:30:47.830: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Running", Reason="", readiness=true. Elapsed: 28.740913971s
Jun 23 09:30:49.878: INFO: Pod "pod-subpath-test-configmap-fx9s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.788670054s
STEP: Saw pod success
Jun 23 09:30:49.878: INFO: Pod "pod-subpath-test-configmap-fx9s" satisfied condition "Succeeded or Failed"
Jun 23 09:30:49.925: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-configmap-fx9s container test-container-subpath-configmap-fx9s: <nil>
STEP: delete the pod
Jun 23 09:30:50.034: INFO: Waiting for pod pod-subpath-test-configmap-fx9s to disappear
Jun 23 09:30:50.080: INFO: Pod pod-subpath-test-configmap-fx9s no longer exists
STEP: Deleting pod pod-subpath-test-configmap-fx9s
Jun 23 09:30:50.080: INFO: Deleting pod "pod-subpath-test-configmap-fx9s" in namespace "subpath-8784"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with configmap pod [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name projected-secret-test-map-56792509-fa9f-492e-ad00-004f2397a769
STEP: Creating a pod to test consume secrets
Jun 23 09:30:40.274: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a" in namespace "projected-6996" to be "Succeeded or Failed"
Jun 23 09:30:40.320: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.746092ms
Jun 23 09:30:42.367: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092906244s
Jun 23 09:30:44.414: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140411833s
Jun 23 09:30:46.461: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187226136s
Jun 23 09:30:48.509: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235288997s
Jun 23 09:30:50.556: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.282198084s
STEP: Saw pod success
Jun 23 09:30:50.556: INFO: Pod "pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a" satisfied condition "Succeeded or Failed"
Jun 23 09:30:50.602: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 09:30:50.728: INFO: Waiting for pod pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a to disappear
Jun 23 09:30:50.773: INFO: Pod pod-projected-secrets-8b88e4a0-3f11-4f7d-8505-6ca2919c170a no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.070 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:50.896: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 32 lines ...
      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1720
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:30:17.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 80 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:54.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-959" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":13,"skipped":96,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:30:54.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename protocol
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:30:54.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "protocol-5774" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":14,"skipped":96,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:54.768: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":33,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:30:40.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 196 lines ...
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:340
    should create and stop a working application  [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:30:54.795: INFO: >>> kubeConfig: /root/.kube/config
... skipping 662 lines ...
test/e2e/common/node/framework.go:23
  NodeLease
  test/e2e/common/node/node_lease.go:51
    the kubelet should report node status infrequently
    test/e2e/common/node/node_lease.go:114
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:55.977: INFO: Only supported for providers [azure] (not gce)
... skipping 43 lines ...
• [SLOW TEST:19.715 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  test/e2e/apps/rc.go:70
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a private image","total":-1,"completed":14,"skipped":103,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 29 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":8,"skipped":83,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jun 23 09:30:29.359: INFO: PersistentVolumeClaim pvc-txgb6 found but phase is Pending instead of Bound.
Jun 23 09:30:31.417: INFO: PersistentVolumeClaim pvc-txgb6 found and phase=Bound (10.311954743s)
Jun 23 09:30:31.417: INFO: Waiting up to 3m0s for PersistentVolume local-4rz5q to have phase Bound
Jun 23 09:30:31.468: INFO: PersistentVolume local-4rz5q found and phase=Bound (51.306143ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7xgm
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 09:30:31.626: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7xgm" in namespace "provisioning-3786" to be "Succeeded or Failed"
Jun 23 09:30:31.676: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Pending", Reason="", readiness=false. Elapsed: 49.902288ms
Jun 23 09:30:33.727: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101337682s
Jun 23 09:30:35.780: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 4.153553583s
Jun 23 09:30:37.831: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 6.20450778s
Jun 23 09:30:39.881: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 8.255212065s
Jun 23 09:30:41.936: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 10.309752775s
... skipping 3 lines ...
Jun 23 09:30:50.150: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 18.524031236s
Jun 23 09:30:52.200: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 20.573664085s
Jun 23 09:30:54.250: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=true. Elapsed: 22.624284934s
Jun 23 09:30:56.301: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Running", Reason="", readiness=false. Elapsed: 24.675325674s
Jun 23 09:30:58.352: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.725397347s
STEP: Saw pod success
Jun 23 09:30:58.352: INFO: Pod "pod-subpath-test-preprovisionedpv-7xgm" satisfied condition "Succeeded or Failed"
Jun 23 09:30:58.402: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-subpath-test-preprovisionedpv-7xgm container test-container-subpath-preprovisionedpv-7xgm: <nil>
STEP: delete the pod
Jun 23 09:30:58.511: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7xgm to disappear
Jun 23 09:30:58.559: INFO: Pod pod-subpath-test-preprovisionedpv-7xgm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7xgm
Jun 23 09:30:58.560: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7xgm" in namespace "provisioning-3786"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:30:59.286: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 67 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 09:30:49.668: INFO: Waiting up to 5m0s for pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343" in namespace "emptydir-5831" to be "Succeeded or Failed"
Jun 23 09:30:49.710: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343": Phase="Pending", Reason="", readiness=false. Elapsed: 42.66743ms
Jun 23 09:30:51.753: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085618545s
Jun 23 09:30:53.797: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129509766s
Jun 23 09:30:55.840: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172675511s
Jun 23 09:30:57.885: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217521105s
Jun 23 09:30:59.930: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.262268996s
STEP: Saw pod success
Jun 23 09:30:59.930: INFO: Pod "pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343" satisfied condition "Succeeded or Failed"
Jun 23 09:30:59.977: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343 container test-container: <nil>
STEP: delete the pod
Jun 23 09:31:00.082: INFO: Waiting for pod pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343 to disappear
Jun 23 09:31:00.124: INFO: Pod pod-efe2ee3e-9f0d-45b3-a31b-01f3fd840343 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:10.926 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":113,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:00.292: INFO: Only supported for providers [vsphere] (not gce)
... skipping 98 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-ad559335-ac71-4c0b-9f61-1f3d17393a6a
STEP: Creating a pod to test consume configMaps
Jun 23 09:30:58.568: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42" in namespace "projected-7869" to be "Succeeded or Failed"
Jun 23 09:30:58.612: INFO: Pod "pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42": Phase="Pending", Reason="", readiness=false. Elapsed: 44.033057ms
Jun 23 09:31:00.657: INFO: Pod "pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088775978s
Jun 23 09:31:02.702: INFO: Pod "pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133706883s
STEP: Saw pod success
Jun 23 09:31:02.702: INFO: Pod "pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42" satisfied condition "Succeeded or Failed"
Jun 23 09:31:02.762: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:31:02.881: INFO: Waiting for pod pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42 to disappear
Jun 23 09:31:02.925: INFO: Pod pod-projected-configmaps-2112f850-c4c7-4e38-bcda-574bffb1fc42 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:188
Jun 23 09:31:02.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7869" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":104,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 35 lines ...
  test/e2e/kubectl/portforward.go:454
    that expects a client request
    test/e2e/kubectl/portforward.go:455
      should support a client that connects, sends NO DATA, and disconnects
      test/e2e/kubectl/portforward.go:456
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":6,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:03.429: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 62 lines ...
Jun 23 09:30:44.237: INFO: PersistentVolumeClaim pvc-2rq5j found but phase is Pending instead of Bound.
Jun 23 09:30:46.284: INFO: PersistentVolumeClaim pvc-2rq5j found and phase=Bound (2.093124422s)
Jun 23 09:30:46.285: INFO: Waiting up to 3m0s for PersistentVolume local-6jn5t to have phase Bound
Jun 23 09:30:46.331: INFO: PersistentVolume local-6jn5t found and phase=Bound (46.259001ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bpw6
STEP: Creating a pod to test subpath
Jun 23 09:30:46.475: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bpw6" in namespace "provisioning-4061" to be "Succeeded or Failed"
Jun 23 09:30:46.522: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.562511ms
Jun 23 09:30:48.572: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096156365s
Jun 23 09:30:50.622: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146719819s
Jun 23 09:30:52.668: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192848598s
Jun 23 09:30:54.718: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241995458s
Jun 23 09:30:56.766: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.29032673s
STEP: Saw pod success
Jun 23 09:30:56.766: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6" satisfied condition "Succeeded or Failed"
Jun 23 09:30:56.814: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-bpw6 container test-container-subpath-preprovisionedpv-bpw6: <nil>
STEP: delete the pod
Jun 23 09:30:56.932: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bpw6 to disappear
Jun 23 09:30:56.979: INFO: Pod pod-subpath-test-preprovisionedpv-bpw6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bpw6
Jun 23 09:30:56.979: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bpw6" in namespace "provisioning-4061"
STEP: Creating pod pod-subpath-test-preprovisionedpv-bpw6
STEP: Creating a pod to test subpath
Jun 23 09:30:57.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bpw6" in namespace "provisioning-4061" to be "Succeeded or Failed"
Jun 23 09:30:57.125: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.992445ms
Jun 23 09:30:59.181: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102624745s
Jun 23 09:31:01.232: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154020453s
Jun 23 09:31:03.284: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205566704s
STEP: Saw pod success
Jun 23 09:31:03.284: INFO: Pod "pod-subpath-test-preprovisionedpv-bpw6" satisfied condition "Succeeded or Failed"
Jun 23 09:31:03.331: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-bpw6 container test-container-subpath-preprovisionedpv-bpw6: <nil>
STEP: delete the pod
Jun 23 09:31:03.492: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bpw6 to disappear
Jun 23 09:31:03.545: INFO: Pod pod-subpath-test-preprovisionedpv-bpw6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bpw6
Jun 23 09:31:03.545: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bpw6" in namespace "provisioning-4061"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":99,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:04.323: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:263
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":6,"skipped":56,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:29:42.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should implement legacy replacement when the update strategy is OnDelete
    test/e2e/apps/statefulset.go:507
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":7,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:04.883: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/framework/framework.go:188

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-bc13e2b7-dcca-4207-b6cb-0d72814d451a
STEP: Creating a pod to test consume configMaps
Jun 23 09:30:56.389: INFO: Waiting up to 5m0s for pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e" in namespace "configmap-6563" to be "Succeeded or Failed"
Jun 23 09:30:56.432: INFO: Pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.850876ms
Jun 23 09:30:58.476: INFO: Pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086940849s
Jun 23 09:31:00.519: INFO: Pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13060532s
Jun 23 09:31:02.563: INFO: Pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173825014s
Jun 23 09:31:04.607: INFO: Pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.218518295s
STEP: Saw pod success
Jun 23 09:31:04.607: INFO: Pod "pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e" satisfied condition "Succeeded or Failed"
Jun 23 09:31:04.650: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:31:04.760: INFO: Waiting for pod pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e to disappear
Jun 23 09:31:04.807: INFO: Pod pod-configmaps-dba291b9-56f2-4a2b-bc5a-c703b89e152e no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:188
... skipping 46 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:04.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "protocol-8209" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":12,"skipped":106,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:04.940: INFO: Only supported for providers [vsphere] (not gce)
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:30:59.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419" in namespace "projected-1189" to be "Succeeded or Failed"
Jun 23 09:30:59.373: INFO: Pod "downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419": Phase="Pending", Reason="", readiness=false. Elapsed: 63.137889ms
Jun 23 09:31:01.422: INFO: Pod "downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11227078s
Jun 23 09:31:03.478: INFO: Pod "downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168804991s
Jun 23 09:31:05.525: INFO: Pod "downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215501989s
STEP: Saw pod success
Jun 23 09:31:05.525: INFO: Pod "downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419" satisfied condition "Succeeded or Failed"
Jun 23 09:31:05.575: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419 container client-container: <nil>
STEP: delete the pod
Jun 23 09:31:05.697: INFO: Waiting for pod downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419 to disappear
Jun 23 09:31:05.751: INFO: Pod downwardapi-volume-d8a981b0-f9c4-41fa-a19b-28c213128419 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.922 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:05.881: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 55 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:06.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6073" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:06.196: INFO: Only supported for providers [azure] (not gce)
... skipping 12 lines ...
      test/e2e/storage/testsuites/volumemode.go:354

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1576
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":8,"skipped":111,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:01.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name projected-secret-test-66e02cbd-b76e-4649-9d3a-43d907f85336
STEP: Creating a pod to test consume secrets
Jun 23 09:31:01.868: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41" in namespace "projected-7022" to be "Succeeded or Failed"
Jun 23 09:31:01.914: INFO: Pod "pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41": Phase="Pending", Reason="", readiness=false. Elapsed: 45.139408ms
Jun 23 09:31:03.961: INFO: Pod "pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092826348s
Jun 23 09:31:06.006: INFO: Pod "pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137690687s
Jun 23 09:31:08.055: INFO: Pod "pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186232578s
STEP: Saw pod success
Jun 23 09:31:08.055: INFO: Pod "pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41" satisfied condition "Succeeded or Failed"
Jun 23 09:31:08.098: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 09:31:08.202: INFO: Waiting for pod pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41 to disappear
Jun 23 09:31:08.245: INFO: Pod pod-projected-secrets-5c06efcc-f550-4e28-99d8-f866bebe1b41 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.892 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":111,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:08.403: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 84 lines ...
Jun 23 09:31:00.502: INFO: PersistentVolumeClaim pvc-jsd2k found but phase is Pending instead of Bound.
Jun 23 09:31:02.550: INFO: PersistentVolumeClaim pvc-jsd2k found and phase=Bound (4.141374886s)
Jun 23 09:31:02.550: INFO: Waiting up to 3m0s for PersistentVolume local-tj4hz to have phase Bound
Jun 23 09:31:02.596: INFO: PersistentVolume local-tj4hz found and phase=Bound (46.257452ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-skr5
STEP: Creating a pod to test subpath
Jun 23 09:31:02.738: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-skr5" in namespace "provisioning-3941" to be "Succeeded or Failed"
Jun 23 09:31:02.788: INFO: Pod "pod-subpath-test-preprovisionedpv-skr5": Phase="Pending", Reason="", readiness=false. Elapsed: 49.663917ms
Jun 23 09:31:04.847: INFO: Pod "pod-subpath-test-preprovisionedpv-skr5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109556116s
Jun 23 09:31:06.896: INFO: Pod "pod-subpath-test-preprovisionedpv-skr5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158197212s
Jun 23 09:31:08.944: INFO: Pod "pod-subpath-test-preprovisionedpv-skr5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206062552s
STEP: Saw pod success
Jun 23 09:31:08.944: INFO: Pod "pod-subpath-test-preprovisionedpv-skr5" satisfied condition "Succeeded or Failed"
Jun 23 09:31:08.994: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-subpath-test-preprovisionedpv-skr5 container test-container-subpath-preprovisionedpv-skr5: <nil>
STEP: delete the pod
Jun 23 09:31:09.119: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-skr5 to disappear
Jun 23 09:31:09.165: INFO: Pod pod-subpath-test-preprovisionedpv-skr5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-skr5
Jun 23 09:31:09.166: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-skr5" in namespace "provisioning-3941"
... skipping 40 lines ...
Jun 23 09:31:09.055: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 09:31:09.055: INFO: stdout: "etcd-0 controller-manager scheduler etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of etcd-0
Jun 23 09:31:09.055: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=kubectl-8096 get componentstatuses etcd-0'
Jun 23 09:31:09.283: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 09:31:09.283: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-0   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
STEP: getting status of controller-manager
Jun 23 09:31:09.283: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=kubectl-8096 get componentstatuses controller-manager'
Jun 23 09:31:09.545: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 09:31:09.545: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Jun 23 09:31:09.545: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=kubectl-8096 get componentstatuses scheduler'
Jun 23 09:31:09.753: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 09:31:09.753: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-1
Jun 23 09:31:09.753: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=kubectl-8096 get componentstatuses etcd-1'
Jun 23 09:31:09.964: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 09:31:09.964: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-1   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jun 23 09:31:09.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8096" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":68,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":10,"skipped":128,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:10.113: INFO: Driver local doesn't support ext3 -- skipping
... skipping 522 lines ...
• [SLOW TEST:73.197 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries [Privileged]
  test/e2e/network/conntrack.go:363
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]","total":-1,"completed":7,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:10.331: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:188

... skipping 153 lines ...
• [SLOW TEST:5.895 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":13,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:10.859: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:188

... skipping 27 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:11.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-8216" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":14,"skipped":111,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:11.405: INFO: Only supported for providers [openstack] (not gce)
... skipping 142 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:12.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-4163" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":8,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:12.271: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 99 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:13.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3289" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":9,"skipped":94,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 14 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:13.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2897" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":10,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] PreStop
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:28.753 seconds]
[sig-node] PreStop
test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  test/e2e/node/pre_stop.go:172
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":4,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:14.569: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 09:31:10.493: INFO: Waiting up to 5m0s for pod "pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79" in namespace "emptydir-6473" to be "Succeeded or Failed"
Jun 23 09:31:10.542: INFO: Pod "pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79": Phase="Pending", Reason="", readiness=false. Elapsed: 48.13205ms
Jun 23 09:31:12.589: INFO: Pod "pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095082628s
Jun 23 09:31:14.634: INFO: Pod "pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140109105s
STEP: Saw pod success
Jun 23 09:31:14.634: INFO: Pod "pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79" satisfied condition "Succeeded or Failed"
Jun 23 09:31:14.678: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79 container test-container: <nil>
STEP: delete the pod
Jun 23 09:31:14.783: INFO: Waiting for pod pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79 to disappear
Jun 23 09:31:14.827: INFO: Pod pod-31adc490-acf5-4a5f-811a-a8cc9c30ba79 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
Jun 23 09:31:14.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6473" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":131,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:14.948: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 131 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:16.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6278" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":12,"skipped":135,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:16.684: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 130 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 137 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should support two pods which have the same volume definition
      test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:19.706: INFO: Only supported for providers [vsphere] (not gce)
... skipping 163 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 23 09:31:14.954: INFO: Waiting up to 5m0s for pod "pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb" in namespace "emptydir-7745" to be "Succeeded or Failed"
Jun 23 09:31:14.998: INFO: Pod "pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 43.938339ms
Jun 23 09:31:17.051: INFO: Pod "pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb": Phase="Running", Reason="", readiness=true. Elapsed: 2.096621043s
Jun 23 09:31:19.098: INFO: Pod "pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.143383695s
Jun 23 09:31:21.142: INFO: Pod "pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188169695s
STEP: Saw pod success
Jun 23 09:31:21.142: INFO: Pod "pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb" satisfied condition "Succeeded or Failed"
Jun 23 09:31:21.191: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb container test-container: <nil>
STEP: delete the pod
Jun 23 09:31:21.288: INFO: Waiting for pod pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb to disappear
Jun 23 09:31:21.339: INFO: Pod pod-9ba2662c-cb30-4415-8703-2f523fb4f0bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.853 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 61 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:380
    should support exec
    test/e2e/kubectl/kubectl.go:392
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":10,"skipped":94,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 7 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:22.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1816" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:22.427: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 413 lines ...
test/e2e/network/common/framework.go:23
  version v1
  test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":13,"skipped":150,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:24.328: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/framework/framework.go:188

... skipping 180 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 23 09:31:14.364: INFO: Waiting up to 5m0s for pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337" in namespace "emptydir-9489" to be "Succeeded or Failed"
Jun 23 09:31:14.410: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337": Phase="Pending", Reason="", readiness=false. Elapsed: 45.980998ms
Jun 23 09:31:16.458: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09403973s
Jun 23 09:31:18.505: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141012429s
Jun 23 09:31:20.551: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186972175s
Jun 23 09:31:22.620: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256262465s
Jun 23 09:31:24.668: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303915991s
STEP: Saw pod success
Jun 23 09:31:24.668: INFO: Pod "pod-ff77fcd1-0753-48c7-9796-3ae206d82337" satisfied condition "Succeeded or Failed"
Jun 23 09:31:24.713: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-ff77fcd1-0753-48c7-9796-3ae206d82337 container test-container: <nil>
STEP: delete the pod
Jun 23 09:31:24.866: INFO: Waiting for pod pod-ff77fcd1-0753-48c7-9796-3ae206d82337 to disappear
Jun 23 09:31:24.919: INFO: Pod pod-ff77fcd1-0753-48c7-9796-3ae206d82337 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.033 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:25.033: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:188

... skipping 37 lines ...
Jun 23 09:31:15.394: INFO: PersistentVolumeClaim pvc-jgfhw found but phase is Pending instead of Bound.
Jun 23 09:31:17.455: INFO: PersistentVolumeClaim pvc-jgfhw found and phase=Bound (4.162260743s)
Jun 23 09:31:17.455: INFO: Waiting up to 3m0s for PersistentVolume local-n6qj7 to have phase Bound
Jun 23 09:31:17.504: INFO: PersistentVolume local-n6qj7 found and phase=Bound (48.622916ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8sgc
STEP: Creating a pod to test subpath
Jun 23 09:31:17.662: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8sgc" in namespace "provisioning-4762" to be "Succeeded or Failed"
Jun 23 09:31:17.709: INFO: Pod "pod-subpath-test-preprovisionedpv-8sgc": Phase="Pending", Reason="", readiness=false. Elapsed: 47.043545ms
Jun 23 09:31:19.761: INFO: Pod "pod-subpath-test-preprovisionedpv-8sgc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098993344s
Jun 23 09:31:21.807: INFO: Pod "pod-subpath-test-preprovisionedpv-8sgc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145194836s
Jun 23 09:31:23.853: INFO: Pod "pod-subpath-test-preprovisionedpv-8sgc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190995838s
Jun 23 09:31:25.905: INFO: Pod "pod-subpath-test-preprovisionedpv-8sgc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.242945095s
STEP: Saw pod success
Jun 23 09:31:25.905: INFO: Pod "pod-subpath-test-preprovisionedpv-8sgc" satisfied condition "Succeeded or Failed"
Jun 23 09:31:25.949: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-subpath-test-preprovisionedpv-8sgc container test-container-subpath-preprovisionedpv-8sgc: <nil>
STEP: delete the pod
Jun 23 09:31:26.059: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8sgc to disappear
Jun 23 09:31:26.111: INFO: Pod pod-subpath-test-preprovisionedpv-8sgc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8sgc
Jun 23 09:31:26.111: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8sgc" in namespace "provisioning-4762"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:26.956: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:188

... skipping 41 lines ...
Jun 23 09:31:15.722: INFO: PersistentVolumeClaim pvc-hk2pm found but phase is Pending instead of Bound.
Jun 23 09:31:17.773: INFO: PersistentVolumeClaim pvc-hk2pm found and phase=Bound (8.223055374s)
Jun 23 09:31:17.773: INFO: Waiting up to 3m0s for PersistentVolume local-wlh6z to have phase Bound
Jun 23 09:31:17.821: INFO: PersistentVolume local-wlh6z found and phase=Bound (48.522226ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6mk5
STEP: Creating a pod to test subpath
Jun 23 09:31:17.957: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6mk5" in namespace "provisioning-9990" to be "Succeeded or Failed"
Jun 23 09:31:18.000: INFO: Pod "pod-subpath-test-preprovisionedpv-6mk5": Phase="Pending", Reason="", readiness=false. Elapsed: 43.088441ms
Jun 23 09:31:20.046: INFO: Pod "pod-subpath-test-preprovisionedpv-6mk5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088931613s
Jun 23 09:31:22.110: INFO: Pod "pod-subpath-test-preprovisionedpv-6mk5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153197021s
Jun 23 09:31:24.157: INFO: Pod "pod-subpath-test-preprovisionedpv-6mk5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199644936s
Jun 23 09:31:26.218: INFO: Pod "pod-subpath-test-preprovisionedpv-6mk5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.260494175s
STEP: Saw pod success
Jun 23 09:31:26.218: INFO: Pod "pod-subpath-test-preprovisionedpv-6mk5" satisfied condition "Succeeded or Failed"
Jun 23 09:31:26.267: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-6mk5 container test-container-volume-preprovisionedpv-6mk5: <nil>
STEP: delete the pod
Jun 23 09:31:26.405: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6mk5 to disappear
Jun 23 09:31:26.475: INFO: Pod pod-subpath-test-preprovisionedpv-6mk5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6mk5
Jun 23 09:31:26.475: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6mk5" in namespace "provisioning-9990"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":17,"skipped":128,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:27.328: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 90 lines ...
Jun 23 09:31:14.947: INFO: PersistentVolumeClaim pvc-q7476 found but phase is Pending instead of Bound.
Jun 23 09:31:17.009: INFO: PersistentVolumeClaim pvc-q7476 found and phase=Bound (2.111871292s)
Jun 23 09:31:17.010: INFO: Waiting up to 3m0s for PersistentVolume local-b9w4h to have phase Bound
Jun 23 09:31:17.063: INFO: PersistentVolume local-b9w4h found and phase=Bound (53.147165ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9x4j
STEP: Creating a pod to test subpath
Jun 23 09:31:17.251: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9x4j" in namespace "provisioning-4229" to be "Succeeded or Failed"
Jun 23 09:31:17.356: INFO: Pod "pod-subpath-test-preprovisionedpv-9x4j": Phase="Pending", Reason="", readiness=false. Elapsed: 104.72626ms
Jun 23 09:31:19.405: INFO: Pod "pod-subpath-test-preprovisionedpv-9x4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154298846s
Jun 23 09:31:21.456: INFO: Pod "pod-subpath-test-preprovisionedpv-9x4j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204669128s
Jun 23 09:31:23.505: INFO: Pod "pod-subpath-test-preprovisionedpv-9x4j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253668252s
Jun 23 09:31:25.556: INFO: Pod "pod-subpath-test-preprovisionedpv-9x4j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.304763459s
STEP: Saw pod success
Jun 23 09:31:25.556: INFO: Pod "pod-subpath-test-preprovisionedpv-9x4j" satisfied condition "Succeeded or Failed"
Jun 23 09:31:25.603: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-9x4j container test-container-volume-preprovisionedpv-9x4j: <nil>
STEP: delete the pod
Jun 23 09:31:25.707: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9x4j to disappear
Jun 23 09:31:25.753: INFO: Pod pod-subpath-test-preprovisionedpv-9x4j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9x4j
Jun 23 09:31:25.753: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9x4j" in namespace "provisioning-4229"
... skipping 53 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:04.916: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Jun 23 09:31:16.038: INFO: PersistentVolumeClaim pvc-bkdjd found but phase is Pending instead of Bound.
Jun 23 09:31:18.082: INFO: PersistentVolumeClaim pvc-bkdjd found and phase=Bound (6.186044351s)
Jun 23 09:31:18.082: INFO: Waiting up to 3m0s for PersistentVolume local-2jh9l to have phase Bound
Jun 23 09:31:18.125: INFO: PersistentVolume local-2jh9l found and phase=Bound (43.283224ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zvrx
STEP: Creating a pod to test subpath
Jun 23 09:31:18.266: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zvrx" in namespace "provisioning-2417" to be "Succeeded or Failed"
Jun 23 09:31:18.312: INFO: Pod "pod-subpath-test-preprovisionedpv-zvrx": Phase="Pending", Reason="", readiness=false. Elapsed: 45.955312ms
Jun 23 09:31:20.360: INFO: Pod "pod-subpath-test-preprovisionedpv-zvrx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093849935s
Jun 23 09:31:22.407: INFO: Pod "pod-subpath-test-preprovisionedpv-zvrx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141138027s
Jun 23 09:31:24.451: INFO: Pod "pod-subpath-test-preprovisionedpv-zvrx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185327121s
Jun 23 09:31:26.509: INFO: Pod "pod-subpath-test-preprovisionedpv-zvrx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.243168031s
STEP: Saw pod success
Jun 23 09:31:26.509: INFO: Pod "pod-subpath-test-preprovisionedpv-zvrx" satisfied condition "Succeeded or Failed"
Jun 23 09:31:26.554: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-subpath-test-preprovisionedpv-zvrx container test-container-subpath-preprovisionedpv-zvrx: <nil>
STEP: delete the pod
Jun 23 09:31:26.730: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zvrx to disappear
Jun 23 09:31:26.801: INFO: Pod pod-subpath-test-preprovisionedpv-zvrx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zvrx
Jun 23 09:31:26.801: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zvrx" in namespace "provisioning-2417"
... skipping 32 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-5e0f9655-7d84-42ca-b539-ba6ad2d004bf
STEP: Creating a pod to test consume configMaps
Jun 23 09:31:21.360: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73" in namespace "projected-7438" to be "Succeeded or Failed"
Jun 23 09:31:21.411: INFO: Pod "pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73": Phase="Pending", Reason="", readiness=false. Elapsed: 50.820474ms
Jun 23 09:31:23.462: INFO: Pod "pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101288491s
Jun 23 09:31:25.508: INFO: Pod "pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148005753s
Jun 23 09:31:27.564: INFO: Pod "pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203083874s
STEP: Saw pod success
Jun 23 09:31:27.564: INFO: Pod "pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73" satisfied condition "Succeeded or Failed"
Jun 23 09:31:27.611: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 09:31:27.742: INFO: Waiting for pod pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73 to disappear
Jun 23 09:31:27.789: INFO: Pod pod-projected-configmaps-b28a38d7-518b-4d4e-8c68-395c9b50bd73 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.968 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":97,"failed":0}

SSSS
------------------------------
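The "consumable in multiple volumes in the same pod" case above mounts a single ConfigMap through two separate volume entries. A hedged sketch of such a pod spec using k8s.io/api/core/v1 types (all object names and the image tag are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmVolume := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					// Both volumes point at the same ConfigMap.
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
				},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-two-volumes"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative tag
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cfg-a", MountPath: "/etc/cfg-a"},
					{Name: "cfg-b", MountPath: "/etc/cfg-b"},
				},
			}},
			Volumes: []corev1.Volume{cmVolume("cfg-a"), cmVolume("cfg-b")},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The test then asserts that the container sees the same keys under both mount paths.
------------------------------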
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 184 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      test/e2e/storage/testsuites/provisioning.go:421
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":8,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:28.154: INFO: Only supported for providers [azure] (not gce)
... skipping 46 lines ...
• [SLOW TEST:85.654 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:28.388: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/downwardapi_volume.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 23 09:31:20.125: INFO: Waiting up to 5m0s for pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904" in namespace "downward-api-1331" to be "Succeeded or Failed"
Jun 23 09:31:20.172: INFO: Pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904": Phase="Pending", Reason="", readiness=false. Elapsed: 47.090762ms
Jun 23 09:31:22.231: INFO: Pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105689155s
Jun 23 09:31:24.275: INFO: Pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149699391s
Jun 23 09:31:26.328: INFO: Pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202535787s
Jun 23 09:31:28.372: INFO: Pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.246302146s
STEP: Saw pod success
Jun 23 09:31:28.373: INFO: Pod "metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904" satisfied condition "Succeeded or Failed"
Jun 23 09:31:28.419: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904 container client-container: <nil>
STEP: delete the pod
Jun 23 09:31:28.519: INFO: Waiting for pod metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904 to disappear
Jun 23 09:31:28.567: INFO: Pod metadata-volume-d884f375-ed41-44f6-9e52-18fc7dc81904 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
... skipping 138 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should support two pods which have the same volume definition
      test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":3,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:29.124: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:188

... skipping 90 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 39 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:32.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4356" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":18,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:32.355: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:188

... skipping 49 lines ...
• [SLOW TEST:7.344 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":12,"skipped":101,"failed":0}

S
------------------------------
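The "configurable pod DNS nameservers" test above exercises dnsPolicy "None" together with a custom dnsConfig, which makes the kubelet write the pod's resolv.conf from the spec instead of cluster DNS. A minimal sketch (names, nameserver, and image tag are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative tag
			}},
			// DNSNone tells the kubelet to ignore cluster DNS entirely and
			// build resolv.conf solely from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------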
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:382
Jun 23 09:31:22.717: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 09:31:22.769: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-5rs9
STEP: Creating a pod to test subpath
Jun 23 09:31:22.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-5rs9" in namespace "provisioning-856" to be "Succeeded or Failed"
Jun 23 09:31:22.886: INFO: Pod "pod-subpath-test-inlinevolume-5rs9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.462666ms
Jun 23 09:31:24.934: INFO: Pod "pod-subpath-test-inlinevolume-5rs9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101201004s
Jun 23 09:31:26.986: INFO: Pod "pod-subpath-test-inlinevolume-5rs9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152983437s
Jun 23 09:31:29.034: INFO: Pod "pod-subpath-test-inlinevolume-5rs9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201610843s
Jun 23 09:31:31.083: INFO: Pod "pod-subpath-test-inlinevolume-5rs9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249988113s
Jun 23 09:31:33.139: INFO: Pod "pod-subpath-test-inlinevolume-5rs9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.305909027s
STEP: Saw pod success
Jun 23 09:31:33.139: INFO: Pod "pod-subpath-test-inlinevolume-5rs9" satisfied condition "Succeeded or Failed"
Jun 23 09:31:33.196: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-inlinevolume-5rs9 container test-container-subpath-inlinevolume-5rs9: <nil>
STEP: delete the pod
Jun 23 09:31:33.301: INFO: Waiting for pod pod-subpath-test-inlinevolume-5rs9 to disappear
Jun 23 09:31:33.349: INFO: Pod pod-subpath-test-inlinevolume-5rs9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-5rs9
Jun 23 09:31:33.349: INFO: Deleting pod "pod-subpath-test-inlinevolume-5rs9" in namespace "provisioning-856"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":107,"failed":0}

S
------------------------------
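The "readOnly file specified in the volumeMount" case above hinges on two volumeMount fields: subPath to expose a single file from inside the volume, and readOnly so writes through that mount fail. A sketch of such a pod around an inline hostPath volume (paths and image tag are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-inlinevolume"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container-subpath",
				Image: "registry.k8s.io/e2e-test-images/busybox:1.29-2", // illustrative tag
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					SubPath:   "test-file", // expose a single file from inside the volume
					ReadOnly:  true,        // writes through this mount are expected to fail
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/provisioning"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------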
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 50 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message if TerminationMessagePath is set [NodeConformance]
      test/e2e/common/node/runtime.go:173
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":12,"skipped":83,"failed":0}

SSSS
------------------------------
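The termination-message tests above revolve around two container fields: terminationMessagePath (where the kubelet reads the message after the container exits) and terminationMessagePolicy. A sketch of a container that writes its own termination message (command and image tag are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // illustrative tag
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-log"},
		// Where the kubelet reads the message from after the container exits.
		TerminationMessagePath: "/dev/termination-log",
		// ReadFile: always read the file. The FallbackToLogsOnError variant
		// exercised above uses the tail of the container log instead when the
		// file is empty and the container failed.
		TerminationMessagePolicy: corev1.TerminationMessageReadFile,
	}
	fmt.Printf("%+v\n", c)
}

The message then surfaces in pod.Status.ContainerStatuses[].State.Terminated.Message, which is what these assertions check.
------------------------------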
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:34.128: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-map-3bd1c023-05e9-428a-9e97-4d4a6dbf5d3d
STEP: Creating a pod to test consume configMaps
Jun 23 09:31:28.385: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0" in namespace "projected-2266" to be "Succeeded or Failed"
Jun 23 09:31:28.434: INFO: Pod "pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.838836ms
Jun 23 09:31:30.485: INFO: Pod "pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099948829s
Jun 23 09:31:32.531: INFO: Pod "pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1462236s
Jun 23 09:31:34.586: INFO: Pod "pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201299525s
STEP: Saw pod success
Jun 23 09:31:34.586: INFO: Pod "pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0" satisfied condition "Succeeded or Failed"
Jun 23 09:31:34.632: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:31:34.737: INFO: Waiting for pod pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0 to disappear
Jun 23 09:31:34.789: INFO: Pod pod-projected-configmaps-4bc5d3e7-3a29-4f5b-8b75-89251d58e4d0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.969 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":101,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:34.939: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 176 lines ...
test/e2e/storage/utils/framework.go:23
  storage capacity
  test/e2e/storage/csi_mock_volume.go:1100
    unlimited
    test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":16,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 125 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      test/e2e/storage/testsuites/ephemeral.go:254
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":7,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:37.212: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 72 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 148 lines ...
test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  test/e2e/storage/csi_mock_volume.go:1334
    CSIStorageCapacity used, insufficient capacity
    test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":7,"skipped":47,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:37.470: INFO: Only supported for providers [vsphere] (not gce)
... skipping 67 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:38.253: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 123 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:31:38.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5871" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":8,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:38.564: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:188

... skipping 42 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test override all
Jun 23 09:31:28.542: INFO: Waiting up to 5m0s for pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc" in namespace "containers-6314" to be "Succeeded or Failed"
Jun 23 09:31:28.585: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.351933ms
Jun 23 09:31:30.628: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085885723s
Jun 23 09:31:32.679: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136417297s
Jun 23 09:31:34.723: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18014168s
Jun 23 09:31:36.766: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223711761s
Jun 23 09:31:38.809: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.266954705s
STEP: Saw pod success
Jun 23 09:31:38.809: INFO: Pod "client-containers-afb61758-5157-4b31-a3fb-68dee20485cc" satisfied condition "Succeeded or Failed"
Jun 23 09:31:38.854: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod client-containers-afb61758-5157-4b31-a3fb-68dee20485cc container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:31:38.955: INFO: Waiting for pod client-containers-afb61758-5157-4b31-a3fb-68dee20485cc to disappear
Jun 23 09:31:38.999: INFO: Pod client-containers-afb61758-5157-4b31-a3fb-68dee20485cc no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:10.919 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}

S
------------------------------
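The "override the image's default command and arguments" case above ("Creating a pod to test override all") sets both Command and Args on the container: Command replaces the image's ENTRYPOINT and Args replaces its CMD, so setting both overrides everything baked into the image. A sketch (binary path, arguments, and image tag are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "agnhost-container",
		Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative tag
		// Command replaces the image ENTRYPOINT; Args replaces the image CMD.
		Command: []string{"/agnhost"},
		Args:    []string{"entrypoint-tester", "override", "arguments"},
	}
	fmt.Printf("%+v\n", c)
}
------------------------------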
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:39.128: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":3,"skipped":54,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:10.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 81 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:256
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:39.695: INFO: Driver local doesn't support ext3 -- skipping
... skipping 88 lines ...
• [SLOW TEST:36.364 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":8,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:41.320: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
Jun 23 09:31:43.188: INFO: Creating a PV followed by a PVC
Jun 23 09:31:43.277: INFO: Waiting for PV local-pvw5gh5 to bind to PVC pvc-nc65g
Jun 23 09:31:43.277: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nc65g] to have phase Bound
Jun 23 09:31:43.320: INFO: PersistentVolumeClaim pvc-nc65g found and phase=Bound (42.946633ms)
Jun 23 09:31:43.320: INFO: Waiting up to 3m0s for PersistentVolume local-pvw5gh5 to have phase Bound
Jun 23 09:31:43.363: INFO: PersistentVolume local-pvw5gh5 found and phase=Bound (42.727572ms)
[It] should fail scheduling due to different NodeSelector
  test/e2e/storage/persistent_volumes-local.go:381
STEP: local-volume-type: dir
Jun 23 09:31:43.506: INFO: Waiting up to 5m0s for pod "pod-f4d5b5b9-fd2c-47a6-ab28-c0ff289dbff5" in namespace "persistent-local-volumes-test-2035" to be "Unschedulable"
Jun 23 09:31:43.550: INFO: Pod "pod-f4d5b5b9-fd2c-47a6-ab28-c0ff289dbff5": Phase="Pending", Reason="", readiness=false. Elapsed: 43.93956ms
Jun 23 09:31:43.550: INFO: Pod "pod-f4d5b5b9-fd2c-47a6-ab28-c0ff289dbff5" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 14 lines ...

• [SLOW TEST:9.979 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:349
    should fail scheduling due to different NodeSelector
    test/e2e/storage/persistent_volumes-local.go:381
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":13,"skipped":102,"failed":0}

SSSSS
------------------------------
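The 'satisfied condition "Unschedulable"' line above means the scheduler set the pod's PodScheduled condition to False with reason Unschedulable, because the PV's node affinity and the pod's node selector point at different nodes. The shape of that check, sketched with core/v1 types (the real framework helper may differ):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isUnschedulable reports whether the scheduler has marked the pod as
// impossible to place -- the condition the wait above keys on.
func isUnschedulable(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodScheduled &&
			cond.Status == corev1.ConditionFalse &&
			cond.Reason == corev1.PodReasonUnschedulable {
			return true
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
		Type:   corev1.PodScheduled,
		Status: corev1.ConditionFalse,
		Reason: corev1.PodReasonUnschedulable,
	}}}}
	fmt.Println(isUnschedulable(pod)) // true
}
------------------------------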
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:44.240: INFO: Only supported for providers [azure] (not gce)
... skipping 74 lines ...
• [SLOW TEST:11.920 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":19,"skipped":143,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:44.332: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 196 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 74 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":177,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:48.129: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 25 lines ...
• [SLOW TEST:8.893 seconds]
[sig-storage] EmptyDir wrapper volumes
test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":5,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:48.678: INFO: Only supported for providers [azure] (not gce)
... skipping 154 lines ...
• [SLOW TEST:16.092 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:49.699: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 93 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-projected-all-test-volume-9f293a72-2fc9-42c9-984f-9994ace7b7ac
STEP: Creating secret with name secret-projected-all-test-volume-b0129d7b-e904-4453-9491-d4ed5b7ae528
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 23 09:31:39.628: INFO: Waiting up to 5m0s for pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b" in namespace "projected-9278" to be "Succeeded or Failed"
Jun 23 09:31:39.675: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b": Phase="Pending", Reason="", readiness=false. Elapsed: 46.954278ms
Jun 23 09:31:41.720: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091400098s
Jun 23 09:31:43.766: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137482446s
Jun 23 09:31:45.814: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185714055s
Jun 23 09:31:47.861: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232506139s
Jun 23 09:31:49.904: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.275951512s
STEP: Saw pod success
Jun 23 09:31:49.904: INFO: Pod "projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b" satisfied condition "Succeeded or Failed"
Jun 23 09:31:49.950: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 23 09:31:50.048: INFO: Waiting for pod projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b to disappear
Jun 23 09:31:50.091: INFO: Pod projected-volume-3ec2b472-1a06-44e6-8782-78f60180276b no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.000 seconds]
[sig-storage] Projected combined
test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":52,"failed":0}

S
------------------------------
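The "all projections" case above combines a ConfigMap, a Secret, and the downward API in one projected volume, so all three appear under a single mount point. A sketch of such a volume (object names mirror the log's STEP lines; item keys are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// One source per projection kind, merged into one mount.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------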
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support file as subpath [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:232
Jun 23 09:31:22.850: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 09:31:22.900: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qgdl
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 09:31:22.961: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qgdl" in namespace "provisioning-2612" to be "Succeeded or Failed"
Jun 23 09:31:23.003: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Pending", Reason="", readiness=false. Elapsed: 42.693586ms
Jun 23 09:31:25.047: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086421239s
Jun 23 09:31:27.096: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134841355s
Jun 23 09:31:29.143: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 6.181889388s
Jun 23 09:31:31.188: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 8.227522674s
Jun 23 09:31:33.235: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 10.274016327s
... skipping 4 lines ...
Jun 23 09:31:43.478: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 20.51719704s
Jun 23 09:31:45.523: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 22.562612899s
Jun 23 09:31:47.569: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 24.608230402s
Jun 23 09:31:49.615: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Running", Reason="", readiness=true. Elapsed: 26.654728199s
Jun 23 09:31:51.662: INFO: Pod "pod-subpath-test-inlinevolume-qgdl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.70145018s
STEP: Saw pod success
Jun 23 09:31:51.662: INFO: Pod "pod-subpath-test-inlinevolume-qgdl" satisfied condition "Succeeded or Failed"
Jun 23 09:31:51.707: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-inlinevolume-qgdl container test-container-subpath-inlinevolume-qgdl: <nil>
STEP: delete the pod
Jun 23 09:31:51.806: INFO: Waiting for pod pod-subpath-test-inlinevolume-qgdl to disappear
Jun 23 09:31:51.851: INFO: Pod pod-subpath-test-inlinevolume-qgdl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qgdl
Jun 23 09:31:51.851: INFO: Deleting pod "pod-subpath-test-inlinevolume-qgdl" in namespace "provisioning-2612"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":61,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 23 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":162,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 16 lines ...
test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  test/e2e/common/node/security_context.go:106
    should not run with an explicit root user ID [LinuxOnly]
    test/e2e/common/node/security_context.go:141
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":13,"skipped":124,"failed":0}

SS
------------------------------
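The "should not run with an explicit root user ID" case above pairs runAsNonRoot with an explicit UID 0: the two settings contradict each other, so the kubelet refuses to start the container rather than letting it run as root. A sketch of the conflicting security context (image tag is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := true
	rootUID := int64(0)
	c := corev1.Container{
		Name:  "explicit-root-uid",
		Image: "registry.k8s.io/e2e-test-images/busybox:1.29-2", // illustrative tag
		SecurityContext: &corev1.SecurityContext{
			RunAsNonRoot: &nonRoot,
			// An explicit UID 0 contradicts RunAsNonRoot, so the kubelet
			// rejects the container at start time; it never reaches Running.
			RunAsUser: &rootUID,
		},
	}
	fmt.Printf("%+v\n", c)
}
------------------------------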
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:56.328: INFO: Only supported for providers [openstack] (not gce)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_configmap.go:61
STEP: Creating configMap with name projected-configmap-test-volume-fdae4293-8ec4-4aad-8ff2-567c64f8075e
STEP: Creating a pod to test consume configMaps
Jun 23 09:31:49.124: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f" in namespace "projected-7417" to be "Succeeded or Failed"
Jun 23 09:31:49.171: INFO: Pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.42343ms
Jun 23 09:31:51.218: INFO: Pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093045532s
Jun 23 09:31:53.269: INFO: Pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144855584s
Jun 23 09:31:55.319: INFO: Pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194195927s
Jun 23 09:31:57.370: INFO: Pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.245892271s
STEP: Saw pod success
Jun 23 09:31:57.370: INFO: Pod "pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f" satisfied condition "Succeeded or Failed"
Jun 23 09:31:57.422: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:31:57.523: INFO: Waiting for pod pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f to disappear
Jun 23 09:31:57.569: INFO: Pod pod-projected-configmaps-e8a3705d-3a3c-4af8-bf00-2431567b929f no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.970 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_configmap.go:61
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":76,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:31:57.700: INFO: Only supported for providers [azure] (not gce)
... skipping 121 lines ...
• [SLOW TEST:14.739 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":15,"skipped":142,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:27.370: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jun 23 09:31:44.612: INFO: PersistentVolumeClaim pvc-qnz6k found but phase is Pending instead of Bound.
Jun 23 09:31:46.668: INFO: PersistentVolumeClaim pvc-qnz6k found and phase=Bound (8.280378532s)
Jun 23 09:31:46.668: INFO: Waiting up to 3m0s for PersistentVolume local-4bhmb to have phase Bound
Jun 23 09:31:46.726: INFO: PersistentVolume local-4bhmb found and phase=Bound (58.075994ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qvsv
STEP: Creating a pod to test subpath
Jun 23 09:31:46.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qvsv" in namespace "provisioning-1994" to be "Succeeded or Failed"
Jun 23 09:31:46.939: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Pending", Reason="", readiness=false. Elapsed: 57.628607ms
Jun 23 09:31:48.986: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103714225s
Jun 23 09:31:51.032: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150436215s
Jun 23 09:31:53.082: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199965237s
Jun 23 09:31:55.130: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.248104734s
Jun 23 09:31:57.180: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298361488s
Jun 23 09:31:59.228: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.346438031s
STEP: Saw pod success
Jun 23 09:31:59.228: INFO: Pod "pod-subpath-test-preprovisionedpv-qvsv" satisfied condition "Succeeded or Failed"
Jun 23 09:31:59.275: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-qvsv container test-container-subpath-preprovisionedpv-qvsv: <nil>
STEP: delete the pod
Jun 23 09:31:59.385: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qvsv to disappear
Jun 23 09:31:59.431: INFO: Pod pod-subpath-test-preprovisionedpv-qvsv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qvsv
Jun 23 09:31:59.431: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qvsv" in namespace "provisioning-1994"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":16,"skipped":142,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:00.499: INFO: Only supported for providers [vsphere] (not gce)
... skipping 86 lines ...
• [SLOW TEST:12.398 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":11,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "apply-2414" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":12,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:03.487: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 35 lines ...
      test/e2e/storage/testsuites/volumes.go:198

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:27.610: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Jun 23 09:31:44.809: INFO: PersistentVolumeClaim pvc-bvqc2 found but phase is Pending instead of Bound.
Jun 23 09:31:46.853: INFO: PersistentVolumeClaim pvc-bvqc2 found and phase=Bound (8.277831253s)
Jun 23 09:31:46.853: INFO: Waiting up to 3m0s for PersistentVolume local-vtc6r to have phase Bound
Jun 23 09:31:46.895: INFO: PersistentVolume local-vtc6r found and phase=Bound (42.44813ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5q4h
STEP: Creating a pod to test subpath
Jun 23 09:31:47.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5q4h" in namespace "provisioning-6234" to be "Succeeded or Failed"
Jun 23 09:31:47.096: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 46.501596ms
Jun 23 09:31:49.142: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091943214s
Jun 23 09:31:51.185: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134935103s
Jun 23 09:31:53.232: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182431521s
Jun 23 09:31:55.277: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227031796s
Jun 23 09:31:57.320: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.270621899s
Jun 23 09:31:59.365: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.314865568s
Jun 23 09:32:01.409: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Pending", Reason="", readiness=false. Elapsed: 14.35874003s
Jun 23 09:32:03.453: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.403118646s
STEP: Saw pod success
Jun 23 09:32:03.453: INFO: Pod "pod-subpath-test-preprovisionedpv-5q4h" satisfied condition "Succeeded or Failed"
Jun 23 09:32:03.496: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-5q4h container test-container-subpath-preprovisionedpv-5q4h: <nil>
STEP: delete the pod
Jun 23 09:32:03.594: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5q4h to disappear
Jun 23 09:32:03.637: INFO: Pod pod-subpath-test-preprovisionedpv-5q4h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5q4h
Jun 23 09:32:03.637: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5q4h" in namespace "provisioning-6234"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:04.372: INFO: Only supported for providers [vsphere] (not gce)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":13,"skipped":102,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:33.640: INFO: >>> kubeConfig: /root/.kube/config
... skipping 54 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":14,"skipped":102,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:05.085: INFO: Only supported for providers [azure] (not gce)
... skipping 12 lines ...
      test/e2e/storage/testsuites/volume_expand.go:176

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:2077
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:28.675: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jun 23 09:31:44.003: INFO: PersistentVolumeClaim pvc-vqjxm found but phase is Pending instead of Bound.
Jun 23 09:31:46.058: INFO: PersistentVolumeClaim pvc-vqjxm found and phase=Bound (10.293795528s)
Jun 23 09:31:46.059: INFO: Waiting up to 3m0s for PersistentVolume local-9jqn4 to have phase Bound
Jun 23 09:31:46.102: INFO: PersistentVolume local-9jqn4 found and phase=Bound (43.46828ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wl7j
STEP: Creating a pod to test subpath
Jun 23 09:31:46.236: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wl7j" in namespace "provisioning-6736" to be "Succeeded or Failed"
Jun 23 09:31:46.285: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 48.929095ms
Jun 23 09:31:48.328: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092072748s
Jun 23 09:31:50.372: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135691813s
Jun 23 09:31:52.431: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195015997s
Jun 23 09:31:54.478: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241691479s
Jun 23 09:31:56.522: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.285713241s
Jun 23 09:31:58.566: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.329949074s
Jun 23 09:32:00.611: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.374799664s
Jun 23 09:32:02.657: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 16.420585227s
Jun 23 09:32:04.701: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.465365417s
STEP: Saw pod success
Jun 23 09:32:04.702: INFO: Pod "pod-subpath-test-preprovisionedpv-wl7j" satisfied condition "Succeeded or Failed"
Jun 23 09:32:04.748: INFO: Trying to get logs from node nodes-us-west4-a-6v6c pod pod-subpath-test-preprovisionedpv-wl7j container test-container-subpath-preprovisionedpv-wl7j: <nil>
STEP: delete the pod
Jun 23 09:32:04.990: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wl7j to disappear
Jun 23 09:32:05.046: INFO: Pod pod-subpath-test-preprovisionedpv-wl7j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wl7j
Jun 23 09:32:05.046: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wl7j" in namespace "provisioning-6736"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:06.057: INFO: Driver local doesn't support ext3 -- skipping
... skipping 248 lines ...
      test/e2e/storage/testsuites/volumes.go:161

      Driver "local" does not provide raw block - skipping

      test/e2e/storage/testsuites/volumes.go:114
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","total":-1,"completed":21,"skipped":166,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:32:09.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:32:10.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-780" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":22,"skipped":166,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 31 lines ...
  test/e2e/common/node/runtime.go:43
    when starting a container that exits
    test/e2e/common/node/runtime.go:44
      should run with the expected status [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":123,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:32:11.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2764" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":23,"skipped":167,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:11.623: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 64 lines ...
• [SLOW TEST:89.686 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:11.741: INFO: Only supported for providers [azure] (not gce)
... skipping 63 lines ...
• [SLOW TEST:11.954 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":17,"skipped":156,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:10.492 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":65,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:14.052: INFO: Driver local doesn't support ext3 -- skipping
... skipping 56 lines ...
      test/e2e/storage/testsuites/fsgroupchangepolicy.go:216

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":111,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:59.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 66 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":15,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:14.794: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 35 lines ...
      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1576
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":9,"skipped":73,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:31:41.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 26 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:32:16.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8521" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":18,"skipped":161,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:16.541: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 60 lines ...
• [SLOW TEST:7.053 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":11,"skipped":129,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:32:18.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1162" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":16,"skipped":117,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 09:32:15.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test service account token: 
Jun 23 09:32:15.939: INFO: Waiting up to 5m0s for pod "test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e" in namespace "svcaccounts-5855" to be "Succeeded or Failed"
Jun 23 09:32:15.985: INFO: Pod "test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.936875ms
Jun 23 09:32:18.030: INFO: Pod "test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091400427s
Jun 23 09:32:20.077: INFO: Pod "test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138239928s
STEP: Saw pod success
Jun 23 09:32:20.077: INFO: Pod "test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e" satisfied condition "Succeeded or Failed"
Jun 23 09:32:20.126: INFO: Trying to get logs from node nodes-us-west4-a-shvt pod test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:32:20.253: INFO: Waiting for pod test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e to disappear
Jun 23 09:32:20.298: INFO: Pod test-pod-41d0c9aa-1dc1-4cfc-a18f-64c79c05072e no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:188
Jun 23 09:32:20.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5855" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":11,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:20.418: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-map-6fde367f-7572-43bc-a869-e9151e1216b0
STEP: Creating a pod to test consume configMaps
Jun 23 09:32:12.512: INFO: Waiting up to 5m0s for pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2" in namespace "configmap-5100" to be "Succeeded or Failed"
Jun 23 09:32:12.611: INFO: Pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2": Phase="Pending", Reason="", readiness=false. Elapsed: 99.053855ms
Jun 23 09:32:14.729: INFO: Pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21739705s
Jun 23 09:32:16.788: INFO: Pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275873404s
Jun 23 09:32:18.837: INFO: Pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.32533244s
Jun 23 09:32:20.883: INFO: Pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.37163468s
STEP: Saw pod success
Jun 23 09:32:20.883: INFO: Pod "pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2" satisfied condition "Succeeded or Failed"
Jun 23 09:32:20.928: INFO: Trying to get logs from node nodes-us-west4-a-p9s4 pod pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 09:32:21.071: INFO: Waiting for pod pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2 to disappear
Jun 23 09:32:21.125: INFO: Pod pod-configmaps-6651b5f0-1dcd-4d11-acd8-e8ac6b5778f2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:9.505 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":77,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:21.329: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Jun 23 09:32:15.383: INFO: PersistentVolumeClaim pvc-7ctd2 found but phase is Pending instead of Bound.
Jun 23 09:32:17.437: INFO: PersistentVolumeClaim pvc-7ctd2 found and phase=Bound (8.26068699s)
Jun 23 09:32:17.438: INFO: Waiting up to 3m0s for PersistentVolume local-bgnjl to have phase Bound
Jun 23 09:32:17.483: INFO: PersistentVolume local-bgnjl found and phase=Bound (44.94909ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-94fs
STEP: Creating a pod to test subpath
Jun 23 09:32:17.657: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-94fs" in namespace "provisioning-3633" to be "Succeeded or Failed"
Jun 23 09:32:17.715: INFO: Pod "pod-subpath-test-preprovisionedpv-94fs": Phase="Pending", Reason="", readiness=false. Elapsed: 57.776644ms
Jun 23 09:32:19.759: INFO: Pod "pod-subpath-test-preprovisionedpv-94fs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102150919s
Jun 23 09:32:21.813: INFO: Pod "pod-subpath-test-preprovisionedpv-94fs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155331457s
Jun 23 09:32:23.857: INFO: Pod "pod-subpath-test-preprovisionedpv-94fs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.200088274s
STEP: Saw pod success
Jun 23 09:32:23.857: INFO: Pod "pod-subpath-test-preprovisionedpv-94fs" satisfied condition "Succeeded or Failed"
Jun 23 09:32:23.902: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-94fs container test-container-volume-preprovisionedpv-94fs: <nil>
STEP: delete the pod
Jun 23 09:32:24.009: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-94fs to disappear
Jun 23 09:32:24.051: INFO: Pod pod-subpath-test-preprovisionedpv-94fs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-94fs
Jun 23 09:32:24.051: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-94fs" in namespace "provisioning-3633"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 37 lines ...
  test/e2e/framework/framework.go:188
Jun 23 09:32:25.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-779" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":7,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:25.254: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:188

... skipping 129 lines ...
Jun 23 09:32:13.882: INFO: PersistentVolumeClaim pvc-gzf5n found but phase is Pending instead of Bound.
Jun 23 09:32:15.939: INFO: PersistentVolumeClaim pvc-gzf5n found and phase=Bound (14.397481009s)
Jun 23 09:32:15.940: INFO: Waiting up to 3m0s for PersistentVolume local-fbr6n to have phase Bound
Jun 23 09:32:15.984: INFO: PersistentVolume local-fbr6n found and phase=Bound (43.984513ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zcng
STEP: Creating a pod to test subpath
Jun 23 09:32:16.160: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zcng" in namespace "provisioning-2390" to be "Succeeded or Failed"
Jun 23 09:32:16.291: INFO: Pod "pod-subpath-test-preprovisionedpv-zcng": Phase="Pending", Reason="", readiness=false. Elapsed: 130.626441ms
Jun 23 09:32:18.366: INFO: Pod "pod-subpath-test-preprovisionedpv-zcng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205744074s
Jun 23 09:32:20.411: INFO: Pod "pod-subpath-test-preprovisionedpv-zcng": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251080697s
Jun 23 09:32:22.458: INFO: Pod "pod-subpath-test-preprovisionedpv-zcng": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298146747s
Jun 23 09:32:24.505: INFO: Pod "pod-subpath-test-preprovisionedpv-zcng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.34464225s
STEP: Saw pod success
Jun 23 09:32:24.505: INFO: Pod "pod-subpath-test-preprovisionedpv-zcng" satisfied condition "Succeeded or Failed"
Jun 23 09:32:24.550: INFO: Trying to get logs from node nodes-us-west4-a-pdqm pod pod-subpath-test-preprovisionedpv-zcng container test-container-volume-preprovisionedpv-zcng: <nil>
STEP: delete the pod
Jun 23 09:32:24.650: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zcng to disappear
Jun 23 09:32:24.694: INFO: Pod pod-subpath-test-preprovisionedpv-zcng no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zcng
Jun 23 09:32:24.694: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zcng" in namespace "provisioning-2390"
... skipping 77 lines ...
• [SLOW TEST:8.488 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":12,"skipped":133,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:26.973: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 57 lines ...
• [SLOW TEST:60.509 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:28.998: INFO: Only supported for providers [azure] (not gce)
... skipping 61 lines ...
Jun 23 09:31:58.817: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:31:58.863: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:31:59.100: INFO: Unable to read jessie_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:31:59.146: INFO: Unable to read jessie_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:31:59.192: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:31:59.238: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:31:59.416: INFO: Lookups using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 failed for: [wheezy_udp@dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_udp@dns-test-service.dns-3751.svc.cluster.local jessie_tcp@dns-test-service.dns-3751.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local]

Jun 23 09:32:04.477: INFO: Unable to read wheezy_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:04.522: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:04.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:04.631: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:04.956: INFO: Unable to read jessie_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:05.062: INFO: Unable to read jessie_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:05.241: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:05.287: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:05.523: INFO: Lookups using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 failed for: [wheezy_udp@dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_udp@dns-test-service.dns-3751.svc.cluster.local jessie_tcp@dns-test-service.dns-3751.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local]

Jun 23 09:32:09.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:09.507: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:09.556: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:09.605: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:09.870: INFO: Unable to read jessie_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:09.983: INFO: Unable to read jessie_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:10.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:10.102: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:10.288: INFO: Lookups using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 failed for: [wheezy_udp@dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_udp@dns-test-service.dns-3751.svc.cluster.local jessie_tcp@dns-test-service.dns-3751.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local]

Jun 23 09:32:14.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:14.719: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:14.773: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:14.829: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:15.149: INFO: Unable to read jessie_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:15.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:15.255: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:15.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:15.498: INFO: Lookups using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 failed for: [wheezy_udp@dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_udp@dns-test-service.dns-3751.svc.cluster.local jessie_tcp@dns-test-service.dns-3751.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local]

Jun 23 09:32:19.464: INFO: Unable to read wheezy_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:19.511: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:19.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:19.616: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:19.878: INFO: Unable to read jessie_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:19.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:19.977: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:20.022: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:20.227: INFO: Lookups using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 failed for: [wheezy_udp@dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_udp@dns-test-service.dns-3751.svc.cluster.local jessie_tcp@dns-test-service.dns-3751.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local]

Jun 23 09:32:24.465: INFO: Unable to read wheezy_udp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:24.509: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:24.554: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:24.599: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:24.983: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local from pod dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0: the server could not find the requested resource (get pods dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0)
Jun 23 09:32:25.170: INFO: Lookups using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 failed for: [wheezy_udp@dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@dns-test-service.dns-3751.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3751.svc.cluster.local]

Jun 23 09:32:31.349: INFO: DNS probes using dns-3751/dns-test-b75bf40c-e5af-44d1-8f1f-922afa4764c0 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:39.806 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":8,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 80 lines ...
• [SLOW TEST:111.754 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
  test/e2e/network/service.go:1922
------------------------------
{"msg":"PASSED [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false","total":-1,"completed":7,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:36.698: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 64 lines ...
  test/e2e/common/node/runtime.go:43
    when running a container with a new image
    test/e2e/common/node/runtime.go:259
      should be able to pull from private registry with secret [NodeConformance]
      test/e2e/common/node/runtime.go:386
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":17,"skipped":118,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 09:32:41.826: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 69 lines ...
  test/e2e/kubectl/kubectl.go:245
[It] should check if cluster-info dump succeeds
  test/e2e/kubectl/kubectl.go:1104
STEP: running cluster-info dump
Jun 23 09:32:42.208: INFO: Running '/logs/artifacts/0e0612ae-f2d4-11ec-aca4-16bc79448f0b/kubectl --server=https://34.125.171.150 --kubeconfig=/root/.kube/config --namespace=kubectl-7410 cluster-info dump'
Jun 23 09:32:45.762: INFO: stderr: ""
Jun 23 09:32:45.772: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"15764\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"master-us-west4-a-w636\",\n                \"uid\": \"42367b43-818e-4fab-b8fd-d77041cf5f9d\",\n                \"resourceVersion\": \"9268\",\n                \"creationTimestamp\": \"2022-06-23T09:23:11Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"e2-standard-2\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"cloud.google.com/metadata-proxy-ready\": \"true\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west4\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west4-a\",\n                    \"kops.k8s.io/instancegroup\": \"master-us-west4-a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"master-us-west4-a-w636\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"e2-standard-2\",\n                    \"topology.gke.io/zone\": \"us-west4-a\",\n                    \"topology.kubernetes.io/region\": \"us-west4\",\n                    \"topology.kubernetes.io/zone\": \"us-west4-a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"pd.csi.storage.gke.io\\\":\\\"projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/master-us-west4-a-w636\\\"}\",\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.0.44\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.0.203\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.0.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"gce://k8s-boskos-gce-project-09/us-west4-a/master-us-west4-a-w636\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/control-plane\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48600704Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"8145396Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44790408733\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"8042996Ki\",\n                    \"pods\": \"110\"\n                
},\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:24:47Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:47Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:30:22Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:23:08Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:30:22Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:23:08Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:30:22Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:23:08Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:30:22Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:34Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"10.0.16.6\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.125.10.89\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"master-us-west4-a-w636\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"39996dee3d467ae6aa6d852fbca9e1d6\",\n                    \"systemUUID\": \"39996dee-3d46-7ae6-aa6d-852fbca9e1d6\",\n                    \"bootID\": \"a8cee1af-f13f-4c89-b7c4-67adff33fb0e\",\n                    \"kernelVersion\": \"5.11.0-1028-gcp\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.6.6\",\n                    \"kubeletVersion\": \"v1.24.2\",\n                    \"kubeProxyVersion\": \"v1.24.2\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/etcdadm/etcd-manager@sha256:0c7f0f377fe216f3687a7690dffa82fc419e1ef1e3f409e95cf4cef528d83d0e\",\n                            \"registry.k8s.io/etcdadm/etcd-manager:v3.0.20220617\"\n                        ],\n                        \"sizeBytes\": 216301173\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:79e66c3c2677e9ecc3fd5b2ed8e4ea7e49cf99ed6ee181f2ef43400c4db5eef0\",\n                            \"quay.io/cilium/cilium:v1.11.5\"\n                        ],\n                        \"sizeBytes\": 158523877\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64@sha256:e31b9dc1170027a5108e880ab1cdc32626fc7c9caf7676fd3af1ec31aad9d57e\",\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 131054035\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64@sha256:2983c26046cf32a9f877a2e6386b2b392cfb9fea25675220a439f6dae45ae3e5\",\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 120695097\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 111851063\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver@sha256:b316059d0057e2bcb98d680feb99cd16e031260dd4ad0ce6c51ba8a28b48d9b7\",\n                            
\"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.3.4\"\n                        ],\n                        \"sizeBytes\": 69998657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 52332856\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/kops/kops-controller:1.24.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 41603157\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/kops/dns-controller:1.24.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 41099876\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/operator@sha256:a6095fedca15081df3bfb70aa627578d642eeaf3b0e0140100c1086fd47bbfb5\",\n                            \"quay.io/cilium/operator:v1.11.5\"\n                        ],\n                        \"sizeBytes\": 23918935\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"registry.k8s.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"registry.k8s.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"registry.k8s.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-snapshotter@sha256:9af9bf28430b00a0cedeb2ec29acadce45e6afcecd8bdf31c793c624cfa75fa7\",\n                            \"registry.k8s.io/sig-storage/csi-snapshotter:v3.0.3\"\n                        ],\n                        \"sizeBytes\": 19500777\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/k8scloudprovidergcp/cloud-controller-manager@sha256:881fd1095937638040723973ade90e6700f1c831a78fb585a3227c4d021b0df9\",\n                            \"docker.io/k8scloudprovidergcp/cloud-controller-manager:latest\"\n                        ],\n                        \"sizeBytes\": 18702875\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e\",\n                            \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\n                        ],\n      
                  \"sizeBytes\": 9515805\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a\",\n                            \"k8s.gcr.io/metadata-proxy:v0.1.12\"\n                        ],\n                        \"sizeBytes\": 5301657\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 5136082\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/k8s-custom-iptables@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5\",\n                            \"k8s.gcr.io/k8s-custom-iptables:1.0\"\n                        ],\n                        \"sizeBytes\": 3335579\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\",\n                            \"registry.k8s.io/pause:3.6\"\n                        ],\n                        \"sizeBytes\": 301773\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"nodes-us-west4-a-6v6c\",\n                \"uid\": \"5272b43d-2fa5-4e98-b695-fa88775922e8\",\n                \"resourceVersion\": \"14443\",\n                \"creationTimestamp\": \"2022-06-23T09:24:27Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"cloud.google.com/metadata-proxy-ready\": \"true\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west4\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west4-a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west4-a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"nodes-us-west4-a-6v6c\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"topology.gke.io/zone\": \"us-west4-a\",\n                    \"topology.hostpath.csi/node\": \"nodes-us-west4-a-6v6c\",\n                    \"topology.kubernetes.io/region\": \"us-west4\",\n                    \"topology.kubernetes.io/zone\": \"us-west4-a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": 
\"{\\\"csi-mock-csi-mock-volumes-2045\\\":\\\"nodes-us-west4-a-6v6c\\\",\\\"pd.csi.storage.gke.io\\\":\\\"projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-6v6c\\\"}\",\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.4.94\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.4.107\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.4.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"gce://k8s-boskos-gce-project-09/us-west4-a/nodes-us-west4-a-6v6c\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48600704Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7629304Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44790408733\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7526904Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:25:18Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:25:18Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:17Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:27Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:17Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:27Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:17Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:27Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": 
\"2022-06-23T09:32:17Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:25:08Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"10.0.16.5\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.125.100.252\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"nodes-us-west4-a-6v6c\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"085652b7520170d956d8f3e14539c88f\",\n                    \"systemUUID\": \"085652b7-5201-70d9-56d8-f3e14539c88f\",\n                    \"bootID\": \"2deacfce-fb15-4352-b7f7-937e074d4ed8\",\n                    \"kernelVersion\": \"5.11.0-1028-gcp\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.6.6\",\n                    \"kubeletVersion\": \"v1.24.2\",\n                    \"kubeProxyVersion\": \"v1.24.2\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:79e66c3c2677e9ecc3fd5b2ed8e4ea7e49cf99ed6ee181f2ef43400c4db5eef0\",\n                            \"quay.io/cilium/cilium:v1.11.5\"\n                        ],\n                        \"sizeBytes\": 158523877\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 111851063\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver@sha256:b316059d0057e2bcb98d680feb99cd16e031260dd4ad0ce6c51ba8a28b48d9b7\",\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.3.4\"\n                        ],\n                        \"sizeBytes\": 69998657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.39\"\n                        ],\n                        \"sizeBytes\": 51105200\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\"\n                        ],\n                        \"sizeBytes\": 41902010\n                    },\n                    {\n                        \"names\": [\n     
                       \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\"\n                        ],\n                        \"sizeBytes\": 40764680\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0\"\n                        ],\n                        \"sizeBytes\": 22728994\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 21671340\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:818f35653f2e214db81d655063e81995de9073328a3430498624c140881026a3\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1\"\n                        ],\n                        \"sizeBytes\": 21564520\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.3.0\"\n                        ],\n                        \"sizeBytes\": 21444261\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3\"\n                        ],\n                        \"sizeBytes\": 15224494\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e\",\n                            \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 9515805\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 8582494\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.4.0\"\n                        ],\n                        \"sizeBytes\": 7960518\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-2\"\n                        ],\n                        \"sizeBytes\": 6979041\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a\",\n                            \"k8s.gcr.io/metadata-proxy:v0.1.12\"\n                        ],\n                        \"sizeBytes\": 5301657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/k8s-custom-iptables@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5\",\n                            \"k8s.gcr.io/k8s-custom-iptables:1.0\"\n                        ],\n                        \"sizeBytes\": 3335579\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-2\"\n                        ],\n                        \"sizeBytes\": 732424\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c\",\n                            \"k8s.gcr.io/pause:3.7\"\n                        ],\n                        \"sizeBytes\": 311278\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\",\n                            \"registry.k8s.io/pause:3.6\"\n                        ],\n                        \"sizeBytes\": 301773\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2045^5ae0db6f-f2d7-11ec-9fc5-c66a6ca40e3d\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2045^5ae0db6f-f2d7-11ec-9fc5-c66a6ca40e3d\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"nodes-us-west4-a-p9s4\",\n                \"uid\": \"21af31c9-4c76-4905-b7ea-5b36006dfceb\",\n                \"resourceVersion\": \"15623\",\n                \"creationTimestamp\": \"2022-06-23T09:24:26Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"cloud.google.com/metadata-proxy-ready\": \"true\",\n                    
\"failure-domain.beta.kubernetes.io/region\": \"us-west4\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west4-a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west4-a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"nodes-us-west4-a-p9s4\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"topology.gke.io/zone\": \"us-west4-a\",\n                    \"topology.hostpath.csi/node\": \"nodes-us-west4-a-p9s4\",\n                    \"topology.kubernetes.io/region\": \"us-west4\",\n                    \"topology.kubernetes.io/zone\": \"us-west4-a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"pd.csi.storage.gke.io\\\":\\\"projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-p9s4\\\"}\",\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.3.49\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.3.111\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.3.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"gce://k8s-boskos-gce-project-09/us-west4-a/nodes-us-west4-a-p9s4\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48600704Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7629296Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44790408733\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7526896Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:25:00Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:25:00Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:37Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:26Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": 
\"2022-06-23T09:32:37Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:26Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:37Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:26Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:37Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:47Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"10.0.16.3\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.125.116.29\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"nodes-us-west4-a-p9s4\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"d21d566bd59cff232d51aca3a5612b21\",\n                    \"systemUUID\": \"d21d566b-d59c-ff23-2d51-aca3a5612b21\",\n                    \"bootID\": \"4967f877-4e2f-4caf-aff4-bd5fa6109c5f\",\n                    \"kernelVersion\": \"5.11.0-1028-gcp\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.6.6\",\n                    \"kubeletVersion\": \"v1.24.2\",\n                    \"kubeProxyVersion\": \"v1.24.2\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:79e66c3c2677e9ecc3fd5b2ed8e4ea7e49cf99ed6ee181f2ef43400c4db5eef0\",\n                            \"quay.io/cilium/cilium:v1.11.5\"\n                        ],\n                        \"sizeBytes\": 158523877\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 111851063\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver@sha256:b316059d0057e2bcb98d680feb99cd16e031260dd4ad0ce6c51ba8a28b48d9b7\",\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.3.4\"\n                        ],\n                        
\"sizeBytes\": 69998657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.39\"\n                        ],\n                        \"sizeBytes\": 51105200\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\"\n                        ],\n                        \"sizeBytes\": 41902010\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\"\n                        ],\n                        \"sizeBytes\": 40764680\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0\"\n                        ],\n                        \"sizeBytes\": 22728994\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 21671340\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:818f35653f2e214db81d655063e81995de9073328a3430498624c140881026a3\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1\"\n                        ],\n                        \"sizeBytes\": 21564520\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.3.0\"\n                        ],\n                        \"sizeBytes\": 21444261\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727\",\n                            \"gcr.io/k8s-authenticated-test/agnhost:2.6\"\n                        ],\n                        \"sizeBytes\": 18352698\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3\"\n                        ],\n                        \"sizeBytes\": 15224494\n                    },\n                    {\n                        \"names\": [\n                      
      \"registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 15209393\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e\",\n                            \"registry.k8s.io/coredns/coredns:v1.8.6\"\n                        ],\n                        \"sizeBytes\": 13585107\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e\",\n                            \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 9515805\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 8582494\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.4.0\"\n                        ],\n                        \"sizeBytes\": 7960518\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-2\"\n                        ],\n                        \"sizeBytes\": 6979041\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a\",\n                            \"k8s.gcr.io/metadata-proxy:v0.1.12\"\n                        ],\n                        \"sizeBytes\": 5301657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/k8s-custom-iptables@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5\",\n                            \"k8s.gcr.io/k8s-custom-iptables:1.0\"\n                        ],\n                        \"sizeBytes\": 3335579\n                    },\n                    {\n                        \"names\": [\n                            
\"gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0\",\n                            \"gcr.io/authenticated-image-pulling/alpine:3.7\"\n                        ],\n                        \"sizeBytes\": 2110879\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-2\"\n                        ],\n                        \"sizeBytes\": 732424\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c\",\n                            \"k8s.gcr.io/pause:3.7\"\n                        ],\n                        \"sizeBytes\": 311278\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\",\n                            \"registry.k8s.io/pause:3.6\"\n                        ],\n                        \"sizeBytes\": 301773\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/pvc-971bf9bc-44ce-460b-9543-958139e00730\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/pvc-971bf9bc-44ce-460b-9543-958139e00730\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"nodes-us-west4-a-pdqm\",\n                \"uid\": \"46a22953-eff6-4ab5-b66d-47a60d6ec18c\",\n                \"resourceVersion\": \"15674\",\n                \"creationTimestamp\": \"2022-06-23T09:24:26Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"cloud.google.com/metadata-proxy-ready\": \"true\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west4\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west4-a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west4-a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"nodes-us-west4-a-pdqm\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"topology.gke.io/zone\": \"us-west4-a\",\n                    \"topology.hostpath.csi/node\": \"nodes-us-west4-a-pdqm\",\n                    \"topology.kubernetes.io/region\": \"us-west4\",\n                    \"topology.kubernetes.io/zone\": \"us-west4-a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": 
\"{\\\"pd.csi.storage.gke.io\\\":\\\"projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-pdqm\\\"}\",\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.2.226\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.2.75\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.2.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"gce://k8s-boskos-gce-project-09/us-west4-a/nodes-us-west4-a-pdqm\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48600704Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7629304Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44790408733\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7526904Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:25:02Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:25:02Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:26Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:26Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:26Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:26Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:26Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:26Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:26Z\",\n                        \"lastTransitionTime\": 
\"2022-06-23T09:24:56Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"10.0.16.4\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.125.221.51\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"nodes-us-west4-a-pdqm\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"1b904f3f3083b3afd2ea4e372537e260\",\n                    \"systemUUID\": \"1b904f3f-3083-b3af-d2ea-4e372537e260\",\n                    \"bootID\": \"d5e18857-cb0b-45a7-8f43-f2bbe568d227\",\n                    \"kernelVersion\": \"5.11.0-1028-gcp\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.6.6\",\n                    \"kubeletVersion\": \"v1.24.2\",\n                    \"kubeProxyVersion\": \"v1.24.2\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:79e66c3c2677e9ecc3fd5b2ed8e4ea7e49cf99ed6ee181f2ef43400c4db5eef0\",\n                            \"quay.io/cilium/cilium:v1.11.5\"\n                        ],\n                        \"sizeBytes\": 158523877\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 111851063\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5\",\n                            \"k8s.gcr.io/etcd:3.5.3-0\"\n                        ],\n                        \"sizeBytes\": 102143581\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver@sha256:b316059d0057e2bcb98d680feb99cd16e031260dd4ad0ce6c51ba8a28b48d9b7\",\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.3.4\"\n                        ],\n                        \"sizeBytes\": 69998657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.39\"\n                        ],\n                        \"sizeBytes\": 51105200\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\"\n                        ],\n                        \"sizeBytes\": 41902010\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\"\n                        ],\n                        \"sizeBytes\": 40764680\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0\"\n                        ],\n                        \"sizeBytes\": 22728994\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 21671340\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:818f35653f2e214db81d655063e81995de9073328a3430498624c140881026a3\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1\"\n                        ],\n                        \"sizeBytes\": 21564520\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.3.0\"\n                        ],\n                        \"sizeBytes\": 21444261\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf\",\n                            \"k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2\"\n                        ],\n                        \"sizeBytes\": 18651485\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727\",\n                            \"gcr.io/k8s-authenticated-test/agnhost:2.6\"\n                        ],\n                        \"sizeBytes\": 18352698\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.2\"\n                        ],\n                        \"sizeBytes\": 17748301\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3\"\n                        ],\n                        \"sizeBytes\": 15224494\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e\",\n                            \"registry.k8s.io/coredns/coredns:v1.8.6\"\n                        ],\n                        \"sizeBytes\": 13585107\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e\",\n                            \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 9515805\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 8582494\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.4.0\"\n                        ],\n                        \"sizeBytes\": 7960518\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-2\"\n                        ],\n                        \"sizeBytes\": 6979041\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a\",\n                            \"k8s.gcr.io/metadata-proxy:v0.1.12\"\n                        ],\n                        \"sizeBytes\": 5301657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/k8s-custom-iptables@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5\",\n                            \"k8s.gcr.io/k8s-custom-iptables:1.0\"\n                        ],\n                        \"sizeBytes\": 3335579\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                       
     \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-2\"\n                        ],\n                        \"sizeBytes\": 732424\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c\",\n                            \"k8s.gcr.io/pause:3.7\"\n                        ],\n                        \"sizeBytes\": 311278\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\",\n                            \"registry.k8s.io/pause:3.6\"\n                        ],\n                        \"sizeBytes\": 301773\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"nodes-us-west4-a-shvt\",\n                \"uid\": \"f6934226-b1b8-4d15-b07a-d9a81386b92a\",\n                \"resourceVersion\": \"13944\",\n                \"creationTimestamp\": \"2022-06-23T09:24:24Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"cloud.google.com/metadata-proxy-ready\": \"true\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west4\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west4-a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west4-a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"nodes-us-west4-a-shvt\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"n1-standard-2\",\n                    \"topology.gke.io/zone\": \"us-west4-a\",\n                    \"topology.hostpath.csi/node\": \"nodes-us-west4-a-shvt\",\n                    \"topology.kubernetes.io/region\": \"us-west4\",\n                    \"topology.kubernetes.io/zone\": \"us-west4-a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"pd.csi.storage.gke.io\\\":\\\"projects/k8s-boskos-gce-project-09/zones/us-west4-a/instances/nodes-us-west4-a-shvt\\\"}\",\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.1.180\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.1.235\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.1.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": 
\"gce://k8s-boskos-gce-project-09/us-west4-a/nodes-us-west4-a-shvt\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48600704Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7629296Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44790408733\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"7526896Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:25:09Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:25:09Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:05Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:24Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:05Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:24Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:05Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:24:24Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2022-06-23T09:32:05Z\",\n                        \"lastTransitionTime\": \"2022-06-23T09:25:05Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"10.0.16.2\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.125.209.197\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"nodes-us-west4-a-shvt\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"0b8f108d50a30cd2a2d2b8e5029390a9\",\n                    \"systemUUID\": \"0b8f108d-50a3-0cd2-a2d2-b8e5029390a9\",\n                    \"bootID\": \"a208692a-d718-4bba-9b57-81a7444a3c9a\",\n                    \"kernelVersion\": \"5.11.0-1028-gcp\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.6.6\",\n                    \"kubeletVersion\": \"v1.24.2\",\n                    \"kubeProxyVersion\": \"v1.24.2\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:79e66c3c2677e9ecc3fd5b2ed8e4ea7e49cf99ed6ee181f2ef43400c4db5eef0\",\n                            \"quay.io/cilium/cilium:v1.11.5\"\n                        ],\n                        \"sizeBytes\": 158523877\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5\"\n                        ],\n                        \"sizeBytes\": 112030526\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.24.2\"\n                        ],\n                        \"sizeBytes\": 111851063\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver@sha256:b316059d0057e2bcb98d680feb99cd16e031260dd4ad0ce6c51ba8a28b48d9b7\",\n                            \"registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.3.4\"\n                        ],\n                        \"sizeBytes\": 69998657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.39\"\n                        ],\n                        \"sizeBytes\": 51105200\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\"\n    
                    ],\n                        \"sizeBytes\": 41902010\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\"\n                        ],\n                        \"sizeBytes\": 40764680\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0\"\n                        ],\n                        \"sizeBytes\": 22728994\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 21671340\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:818f35653f2e214db81d655063e81995de9073328a3430498624c140881026a3\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1\"\n                        ],\n                        \"sizeBytes\": 21564520\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.3.0\"\n                        ],\n                        \"sizeBytes\": 21444261\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.2\"\n                        ],\n                        \"sizeBytes\": 17748301\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3\"\n                        ],\n                        \"sizeBytes\": 15224494\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e\",\n                            \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 9515805\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n               
     {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0\"\n                        ],\n                        \"sizeBytes\": 8582494\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.4.0\"\n                        ],\n                        \"sizeBytes\": 7960518\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-2\"\n                        ],\n                        \"sizeBytes\": 6979041\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a\",\n                            \"k8s.gcr.io/metadata-proxy:v0.1.12\"\n                        ],\n                        \"sizeBytes\": 5301657\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/k8s-custom-iptables@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5\",\n                            \"k8s.gcr.io/k8s-custom-iptables:1.0\"\n                        ],\n                        \"sizeBytes\": 3335579\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-2\"\n                        ],\n                        \"sizeBytes\": 732424\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c\",\n                            \"k8s.gcr.io/pause:3.7\"\n                        ],\n                        \"sizeBytes\": 311278\n                    },\n                    {\n                        \"names\": [\n                            \"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\",\n                            \"registry.k8s.io/pause:3.6\"\n                        ],\n                        \"sizeBytes\": 301773\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/pvc-3846ca96-1616-4532-834b-cd271e64f0e8\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/pvc-3846ca96-1616-4532-834b-cd271e64f0e8\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": 
\"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"7275\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a404d525b6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f278b98a-48e0-430a-a1e9-8567415a54c4\",\n                \"resourceVersion\": \"207\",\n                \"creationTimestamp\": \"2022-06-23T09:24:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"619\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-6wh2w to nodes-us-west4-a-pdqm\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:26Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a493152750\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e1fc812b-e152-4f98-a608-9dcf60f4be42\",\n                \"resourceVersion\": \"252\",\n                \"creationTimestamp\": \"2022-06-23T09:24:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"623\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/cilium:v1.11.5\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"nodes-us-west4-a-pdqm\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:28Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a79c2b7ca2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1ac122c5-feac-47d3-961b-6a09ff7acedc\",\n                \"resourceVersion\": \"346\",\n                \"creationTimestamp\": \"2022-06-23T09:24:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"623\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image 
\\\"quay.io/cilium/cilium:v1.11.5\\\" in 13.037336583s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"nodes-us-west4-a-pdqm\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:41Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a79da238f9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eb9a3fbf-54a9-48e5-9168-7ca432415571\",\n                \"resourceVersion\": \"347\",\n                \"creationTimestamp\": \"2022-06-23T09:24:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"623\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"nodes-us-west4-a-pdqm\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:42Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a7a24dd0b3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c5be550e-af22-4da0-98c2-81e54f6b9368\",\n                \"resourceVersion\": \"348\",\n                \"creationTimestamp\": \"2022-06-23T09:24:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"623\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"nodes-us-west4-a-pdqm\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:42Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a8c6523d24\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0c677f8b-485a-4710-8499-e36843ae948d\",\n                \"resourceVersion\": \"377\",\n                \"creationTimestamp\": \"2022-06-23T09:24:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": 
\"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"623\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/cilium/cilium:v1.11.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"nodes-us-west4-a-pdqm\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:46Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-6wh2w.16fb35a8c8a9775c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"62ad780b-2c1f-4f0b-ab3d-c1773590d4ef\",\n                \"resourceVersion\": \"378\",\n                \"creationTimestamp\": \"2022-06-23T09:24:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-6wh2w\",\n                \"uid\": \"ae6fa6f2-1bd5-4c1f-bd3c-1a84bde1906e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"623\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"nodes-us-west4-a-pdqm\"\n            },\n            \"firstTimestamp\": \"2022-06-23T09:24:47Z\",\n            \"lastTimestamp\": \"2022-06-23T09:24:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:01.919Z\",\"caller\":\"embed/etcd.go:581\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:01.919Z\",\"caller\":\"embed/etcd.go:553\",\"msg\":\"cmux::serve\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:01.919Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f switched to configuration voters=(10388124585224033663)\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:01.920Z\",\"caller\":\"membership/cluster.go:421\",\"msg\":\"added member\",\"cluster-id\":\"c46d6a73697e5a82\",\"local-member-id\":\"902a084588c7d57f\",\"added-peer-id\":\"902a084588c7d57f\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"]}\nI0623 09:23:02.055149    5402 gsfs.go:184] Writing file 
\"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:02.204564    5402 controller.go:187] starting controller iteration\nI0623 09:23:02.204692    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:02.205244    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:23:02.205784    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:02.206842    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995]\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.803Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.803Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became pre-candidate at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.803Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f received MsgPreVoteResp from 902a084588c7d57f at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.803Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.803Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f received MsgVoteResp from 902a084588c7d57f at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.804Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.804Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"raft.node: 902a084588c7d57f elected leader 902a084588c7d57f at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.804Z\",\"caller\":\"etcdserver/server.go:2042\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"902a084588c7d57f\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995]}\",\"request-path\":\"/0/members/902a084588c7d57f/attributes\",\"cluster-id\":\"c46d6a73697e5a82\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.805Z\",\"caller\":\"etcdserver/server.go:2507\",\"msg\":\"setting up initial cluster version using v2 API\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.805Z\",\"caller\":\"embed/serve.go:98\",\"msg\":\"ready to serve client requests\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.807Z\",\"caller\":\"embed/serve.go:188\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3995\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.807Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.807Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.818Z\",\"caller\":\"membership/cluster.go:584\",\"msg\":\"set initial cluster 
version\",\"cluster-id\":\"c46d6a73697e5a82\",\"local-member-id\":\"902a084588c7d57f\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.818Z\",\"caller\":\"api/capability.go:75\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.818Z\",\"caller\":\"etcdserver/server.go:2531\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.5\"}\nI0623 09:23:02.856549    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" quarantined:true > }\nI0623 09:23:02.856843    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:23:02.857340    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:23:02.857953    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:02.857979    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:02.858049    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:02.858157    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:02.858171    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:02.922083    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:02.923038    5402 backup.go:128] performing snapshot save to /tmp/1226014795/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.936Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:211\",\"msg\":\"opened snapshot stream; 
downloading\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.939Z\",\"caller\":\"v3rpc/maintenance.go:125\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.942Z\",\"caller\":\"v3rpc/maintenance.go:165\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.942Z\",\"caller\":\"v3rpc/maintenance.go:174\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:02.943Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:219\",\"msg\":\"completed snapshot read; closing\"}\nI0623 09:23:02.943745    5402 gsfs.go:184] Writing file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/2022-06-23T09:23:02Z-000001/etcd.backup.gz\"\nI0623 09:23:03.120949    5402 gsfs.go:184] Writing file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/2022-06-23T09:23:02Z-000001/_etcd_backup.meta\"\nI0623 09:23:03.293977    5402 backup.go:153] backup complete: name:\"2022-06-23T09:23:02Z-000001\" \nI0623 09:23:03.294655    5402 controller.go:931] backup response: name:\"2022-06-23T09:23:02Z-000001\" \nI0623 09:23:03.294675    5402 controller.go:574] took backup: name:\"2022-06-23T09:23:02Z-000001\" \nI0623 09:23:03.357536    5402 vfs.go:118] listed backups in gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events: [2022-06-23T09:23:02Z-000001]\nI0623 09:23:03.357857    5402 cleanup.go:166] retaining backup \"2022-06-23T09:23:02Z-000001\"\nI0623 09:23:03.358022    5402 restore.go:98] Setting quarantined state to false\nI0623 09:23:03.358610    5402 etcdserver.go:397] Reconfigure request: header:<leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" cluster_name:\"etcd-events\" > \nI0623 09:23:03.358896    5402 etcdserver.go:440] Stopping etcd for reconfigure request: header:<leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" cluster_name:\"etcd-events\" > \nI0623 09:23:03.359033    5402 etcdserver.go:644] killing etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw\nI0623 09:23:03.359324    5402 etcdprocess.go:136] Waiting for etcd to exit\nI0623 09:23:03.360855    5402 etcdprocess.go:331] etcd process exited (datadir /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw; pid=5761); exitCode=-1, exitErr=<nil>\nI0623 09:23:03.459792    5402 etcdprocess.go:136] Waiting for etcd to exit\nI0623 09:23:03.459868    5402 etcdprocess.go:141] Exited etcd: signal: killed\nI0623 09:23:03.460090    5402 etcdserver.go:447] updated cluster state: cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" \nI0623 09:23:03.460463    5402 etcdserver.go:452] Starting etcd version \"3.5.4\"\nI0623 09:23:03.460538    5402 etcdserver.go:560] starting etcd with state 
cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" \nI0623 09:23:03.460644    5402 etcdserver.go:569] starting etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw\nI0623 09:23:03.461149    5402 pki.go:58] adding peerClientIPs [10.0.16.6]\nI0623 09:23:03.461247    5402 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local] IPs:[10.0.16.6 127.0.0.1 ::1]} Usages:[2 1]}\nI0623 09:23:03.461597    5402 certs.go:151] existing certificate not valid after 2024-06-22T09:23:01Z; will regenerate\nI0623 09:23:03.461717    5402 certs.go:211] generating certificate for \"etcd-events-a\"\nI0623 09:23:03.465611    5402 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local] IPs:[127.0.0.1 ::1]} Usages:[1 2]}\nI0623 09:23:03.465903    5402 certs.go:151] existing certificate not valid after 2024-06-22T09:23:01Z; will regenerate\nI0623 09:23:03.465946    5402 certs.go:211] generating certificate for \"etcd-events-a\"\nI0623 09:23:03.569152    5402 certs.go:211] generating certificate for \"etcd-events-a\"\nI0623 09:23:03.571780    5402 etcdprocess.go:210] executing command /opt/etcd-v3.5.4-linux-amd64/etcd [/opt/etcd-v3.5.4-linux-amd64/etcd]\nI0623 09:23:03.572277    5402 etcdprocess.go:315] started etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw; pid=5852\nI0623 09:23:03.572973    5402 restore.go:116] ReconfigureResponse: \nI0623 09:23:03.574321    5402 controller.go:187] starting controller iteration\nI0623 09:23:03.574350    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:03.574801    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:23:03.575086    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:03.575826    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_ADVERTISE_CLIENT_URLS\",\"variable-value\":\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment 
variable\",\"variable-name\":\"ETCD_CERT_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/clients/server.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_CLIENT_CERT_AUTH\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_DATA_DIR\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_ENABLE_V2\",\"variable-value\":\"false\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_ADVERTISE_PEER_URLS\",\"variable-value\":\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER\",\"variable-value\":\"etcd-events-a=https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER_STATE\",\"variable-value\":\"existing\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER_TOKEN\",\"variable-value\":\"55_GJFLFeFHsPvHfe5Baqw\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.594Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_KEY_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/clients/server.key\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LISTEN_CLIENT_URLS\",\"variable-value\":\"https://0.0.0.0:4002\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LISTEN_PEER_URLS\",\"variable-value\":\"https://0.0.0.0:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LOG_OUTPUTS\",\"variable-value\":\"stdout\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment 
variable\",\"variable-name\":\"ETCD_LOGGER\",\"variable-value\":\"zap\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_NAME\",\"variable-value\":\"etcd-events-a\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_CERT_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/peers/me.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_CLIENT_CERT_AUTH\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_KEY_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/peers/me.key\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_TRUSTED_CA_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/peers/ca.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_STRICT_RECONFIG_CHECK\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_TRUSTED_CA_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/clients/ca.crt\"}\n{\"level\":\"warn\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCD_LISTEN_METRICS_URLS=\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"etcdmain/etcd.go:73\",\"msg\":\"Running: \",\"args\":[\"/opt/etcd-v3.5.4-linux-amd64/etcd\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"etcdmain/etcd.go:116\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"embed/etcd.go:131\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.595Z\",\"caller\":\"embed/etcd.go:479\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/peers/me.crt, key = /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/peers/me.key, client-cert=, client-key=, trusted-ca = /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/peers/ca.crt, client-cert-auth = true, crl-file = 
\",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.596Z\",\"caller\":\"embed/etcd.go:139\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4002\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.596Z\",\"caller\":\"embed/etcd.go:308\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.5.4\",\"git-sha\":\"08407ff76\",\"go-version\":\"go1.16.15\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":true,\"initial-corrupt-check\":true,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\",\"downgrade-check-interval\":\"5s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.597Z\",\"caller\":\"etcdserver/backend.go:81\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/55_GJFLFeFHsPvHfe5Baqw/member/snap/db\",\"took\":\"219.498µs\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.598Z\",\"caller\":\"etcdserver/server.go:529\",\"msg\":\"No snapshot found. 
Recovering WAL from scratch!\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.598Z\",\"caller\":\"etcdserver/raft.go:483\",\"msg\":\"restarting local member\",\"cluster-id\":\"c46d6a73697e5a82\",\"local-member-id\":\"902a084588c7d57f\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.598Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.598Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.598Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"newRaft 902a084588c7d57f [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2022-06-23T09:23:03.600Z\",\"caller\":\"auth/store.go:1220\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.602Z\",\"caller\":\"mvcc/kvstore.go:415\",\"msg\":\"kvstore restored\",\"current-rev\":1}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.603Z\",\"caller\":\"etcdserver/quota.go:94\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.605Z\",\"caller\":\"etcdserver/corrupt.go:46\",\"msg\":\"starting initial corruption check\",\"local-member-id\":\"902a084588c7d57f\",\"timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.605Z\",\"caller\":\"etcdserver/corrupt.go:116\",\"msg\":\"initial corruption checking passed; no corruption\",\"local-member-id\":\"902a084588c7d57f\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.605Z\",\"caller\":\"etcdserver/server.go:851\",\"msg\":\"starting etcd server\",\"local-member-id\":\"902a084588c7d57f\",\"local-server-version\":\"3.5.4\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.605Z\",\"caller\":\"etcdserver/server.go:752\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.606Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f switched to configuration voters=(10388124585224033663)\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.608Z\",\"caller\":\"membership/cluster.go:421\",\"msg\":\"added member\",\"cluster-id\":\"c46d6a73697e5a82\",\"local-member-id\":\"902a084588c7d57f\",\"added-peer-id\":\"902a084588c7d57f\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.608Z\",\"caller\":\"membership/cluster.go:584\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"c46d6a73697e5a82\",\"local-member-id\":\"902a084588c7d57f\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.608Z\",\"caller\":\"api/capability.go:75\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.610Z\",\"caller\":\"embed/etcd.go:688\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/clients/server.crt, key = 
/rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/clients/server.key, client-cert=, client-key=, trusted-ca = /rootfs/mnt/master-us-west4-a-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/55_GJFLFeFHsPvHfe5Baqw/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.610Z\",\"caller\":\"embed/etcd.go:277\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"902a084588c7d57f\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.610Z\",\"caller\":\"embed/etcd.go:581\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:03.611Z\",\"caller\":\"embed/etcd.go:553\",\"msg\":\"cmux::serve\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.000Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.001Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became pre-candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.001Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f received MsgPreVoteResp from 902a084588c7d57f at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.001Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.002Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f received MsgVoteResp from 902a084588c7d57f at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.002Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"902a084588c7d57f became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.002Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"raft.node: 902a084588c7d57f elected leader 902a084588c7d57f at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.011Z\",\"caller\":\"etcdserver/server.go:2042\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"902a084588c7d57f\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]}\",\"request-path\":\"/0/members/902a084588c7d57f/attributes\",\"cluster-id\":\"c46d6a73697e5a82\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.011Z\",\"caller\":\"embed/serve.go:98\",\"msg\":\"ready to serve client requests\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.013Z\",\"caller\":\"embed/serve.go:188\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4002\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.014Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.015Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nI0623 
09:23:05.047303    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:05.048182    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:23:05.048455    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:23:05.048992    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:05.049112    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:05.049297    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:05.049866    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:05.050140    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:05.122321    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:05.122599    5402 controller.go:555] controller loop complete\nI0623 09:23:15.123978    5402 controller.go:187] starting controller iteration\nI0623 09:23:15.124014    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:15.124778    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:23:15.125249    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:15.127508    5402 controller.go:699] base client OK for etcd for client urls 
[https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:23:15.147074    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:15.147198    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:23:15.147219    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:23:15.147583    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:15.147603    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:15.147757    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:15.147886    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:15.147910    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:15.212006    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:15.212167    5402 controller.go:555] controller loop complete\nI0623 09:23:25.214266    5402 controller.go:187] starting controller iteration\nI0623 09:23:25.214309    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:25.214814    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:23:25.215193    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:25.216131    5402 
controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:23:25.234819    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:25.234967    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:23:25.235072    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:23:25.235523    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:25.235558    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:25.235637    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:25.235760    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:25.235775    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:25.306547    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:25.306954    5402 controller.go:555] controller loop complete\nI0623 09:23:35.308446    5402 controller.go:187] starting controller iteration\nI0623 09:23:35.308480    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:35.309087    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:23:35.309345    5402 controller.go:293] I am leader with token 
\"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:35.310160    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:23:35.327525    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:35.327706    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:23:35.328120    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:23:35.328606    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:35.328685    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:35.328921    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:35.329277    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:35.329500    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:35.418661    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:35.418884    5402 controller.go:555] controller loop complete\nI0623 09:23:45.420174    5402 controller.go:187] starting controller iteration\nI0623 09:23:45.420205    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:45.420749    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:23:45.421080    5402 
controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:23:45.422021    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:23:45.439826    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:45.440040    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:23:45.440141    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:23:45.440552    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:45.440577    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:45.440750    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:45.441187    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:45.441210    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:23:45.529034    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:45.529152    5402 controller.go:555] controller loop complete\nI0623 09:23:49.663759    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:23:49.888048    5402 volumes.go:234] volume \"a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:23:50.410045    5402 hosts.go:84] hosts update: 
... skipping repeated controller iterations from 09:23:55 through 09:26:47 (identical to the 09:23:35 iteration above apart from timestamps) and the periodic GCE disk checks at 09:24:50 and 09:25:51 ...
leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:26:47.513849    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:26:47.514480    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:26:47.533757    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:26:47.534156    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:26:47.534337    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:26:47.534761    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:26:47.534904    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:26:47.535159    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:26:47.535373    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:26:47.535395    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:26:47.596589    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:26:47.596978    5402 controller.go:555] controller loop complete\nI0623 09:26:51.979098    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:26:52.205355    5402 volumes.go:234] volume \"a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at 
/dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:26:52.818976    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:26:52.819189    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:26:57.601125    5402 controller.go:187] starting controller iteration\nI0623 09:26:57.601180    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:26:57.601796    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:26:57.602140    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:26:57.603210    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:26:57.623001    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:26:57.623147    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:26:57.623171    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:26:57.623714    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:26:57.623732    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], 
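The iterations above follow a fixed cadence: roughly every 10 seconds the controller asserts leadership, reads the etcd cluster state, pushes the member map to peers, refreshes /etc/hosts, and finishes with "controller loop complete". A minimal sketch of that loop shape, purely illustrative: `reconcileOnce` and the 10-second interval are assumptions inferred from the timestamps, not etcd-manager's actual code.

```go
package main

import (
	"context"
	"log"
	"time"
)

// reconcileOnce stands in for one controller iteration: assert leadership,
// gather cluster state, and converge it toward the desired spec.
// The body is a placeholder; etcd-manager's real loop lives in controller.go.
func reconcileOnce(ctx context.Context, leadershipToken string) error {
	log.Printf("starting controller iteration")
	log.Printf("broadcasting leadership assertion with token %q", leadershipToken)
	// ... read cluster state, send member map to peers, refresh /etc/hosts ...
	log.Printf("controller loop complete")
	return nil
}

func main() {
	ctx := context.Background()
	const loopInterval = 10 * time.Second // matches the ~10s spacing of iterations in the log

	ticker := time.NewTicker(loopInterval)
	defer ticker.Stop()
	for {
		if err := reconcileOnce(ctx, "example-token"); err != nil {
			// Keep looping on failure; the next tick retries.
			log.Printf("controller iteration failed: %v", err)
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```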
... skipping 6 repeated controller iterations (09:26:57 through 09:27:48, identical apart from timestamps) ...
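Each of the skipped iterations ends by recomputing the desired /etc/hosts content and, per the hosts.go:181 lines, writing it only when it differs from what is already on disk. A sketch of that idempotent-write pattern, assuming a byte-for-byte comparison; the function name and demo path are illustrative, not etcd-manager's API.

```go
package main

import (
	"bytes"
	"log"
	"os"
)

// writeIfChanged writes content to path only when it differs from the
// current file, avoiding a rewrite of /etc/hosts on every loop iteration.
func writeIfChanged(path string, content []byte) error {
	existing, err := os.ReadFile(path)
	if err == nil && bytes.Equal(existing, content) {
		log.Printf("skipping update of unchanged %s", path)
		return nil
	}
	// 0644: world-readable, as /etc/hosts conventionally is.
	return os.WriteFile(path, content, 0644)
}

func main() {
	hosts := []byte("10.0.16.6 etcd-events-a.internal.example\n")
	if err := writeIfChanged("/tmp/hosts-demo", hosts); err != nil {
		log.Fatal(err)
	}
}
```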
I0623 09:27:52.820867    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a
I0623 09:27:53.010356    5402 volumes.go:234] volume "a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local" is attached to this instance at /dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local
I0623 09:27:53.620846    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]
I0623 09:27:53.620984    5402 hosts.go:181] skipping update of unchanged /etc/hosts
... skipping 6 repeated controller iterations (09:27:58 through 09:28:48, identical apart from timestamps) ...
peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:28:48.923699    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:28:48.923827    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:28:48.925774    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:28:48.925899    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:28:48.926299    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:28:48.926603    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:28:48.926720    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:28:48.997965    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:28:48.998181    5402 controller.go:555] controller loop complete\nI0623 09:28:53.624284    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:28:53.808579    5402 volumes.go:234] volume \"a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:28:54.325113    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:28:54.325249    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:28:59.000081    5402 controller.go:187] starting controller iteration\nI0623 09:28:59.000130    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:28:59.000528    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 
09:28:59.000746    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:28:59.001474    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:28:59.064268    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:28:59.064438    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:28:59.064466    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:28:59.064908    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:28:59.064931    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:28:59.065029    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:28:59.070073    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:28:59.070103    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:28:59.134003    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:28:59.134154    5402 controller.go:555] controller loop complete\nI0623 09:29:09.135786    5402 controller.go:187] starting controller iteration\nI0623 09:29:09.135832    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:09.136198    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" 
endpoints:\"10.0.16.6:3997\" > > \nI0623 09:29:09.136388    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:09.137019    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:29:09.207332    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:29:09.209634    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:29:09.209877    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:29:09.210873    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:09.212053    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:09.212310    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:09.214309    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:29:09.214331    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:29:09.275066    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:29:09.275224    5402 controller.go:555] controller loop complete\nI0623 09:29:19.281145    5402 controller.go:187] starting controller iteration\nI0623 09:29:19.281187    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:19.281555    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > 
leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:29:19.281760    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:19.282457    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:29:19.341025    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:29:19.341177    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:29:19.341203    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:29:19.341897    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:19.342110    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:19.344831    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:19.352696    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:29:19.352979    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:29:19.419659    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:29:19.419781    5402 controller.go:555] controller loop complete\nI0623 09:29:29.421284    5402 controller.go:187] starting controller iteration\nI0623 09:29:29.421346    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:29.422103    5402 leadership.go:37] Got LeaderNotification 
view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:29:29.423223    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:29.425967    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:29:29.457079    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:29:29.458558    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:29:29.458590    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:29:29.459087    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:29.459110    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:29.459216    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:29.459403    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:29:29.459439    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:29:29.528299    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:29:29.528429    5402 controller.go:555] controller loop complete\nI0623 09:29:39.530078    5402 controller.go:187] starting controller iteration\nI0623 09:29:39.530130    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:39.530421    5402 
leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:29:39.530749    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:39.532777    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:29:39.568459    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:29:39.568584    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:29:39.568606    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:29:39.569534    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:39.569558    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:39.569629    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:39.569762    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:29:39.569775    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:29:39.631442    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:29:39.631587    5402 controller.go:555] controller loop complete\nI0623 09:29:49.633232    5402 controller.go:187] starting controller iteration\nI0623 09:29:49.633277    5402 controller.go:264] Broadcasting leadership assertion with token 
\"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:49.634029    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:29:49.634262    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:49.635446    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:29:49.674441    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:29:49.675551    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:29:49.675579    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:29:49.676244    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:49.676271    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:49.676388    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:49.676654    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:29:49.676753    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:29:49.739169    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:29:49.739568    5402 controller.go:555] controller loop complete\nI0623 09:29:54.329815    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:29:54.505705    
5402 volumes.go:234] volume \"a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:29:55.131329    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:55.131466    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:59.741472    5402 controller.go:187] starting controller iteration\nI0623 09:29:59.741521    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:59.742735    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:29:59.743277    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:29:59.744614    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:29:59.776514    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:29:59.776862    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:29:59.777023    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:29:59.777417    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:59.777456    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], 
fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:29:59.777547    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:29:59.778041    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:29:59.778063    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:29:59.837763    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:29:59.837863    5402 controller.go:555] controller loop complete\nI0623 09:30:09.839861    5402 controller.go:187] starting controller iteration\nI0623 09:30:09.839963    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:09.840346    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:30:09.840575    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:09.843826    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:30:09.879712    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:30:09.879860    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:30:09.879885    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:30:09.880219    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:09.880242    5402 hosts.go:84] hosts update: 
primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:09.880331    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:30:09.880612    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:30:09.880655    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:30:09.945753    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:30:09.945880    5402 controller.go:555] controller loop complete\nI0623 09:30:19.947635    5402 controller.go:187] starting controller iteration\nI0623 09:30:19.947695    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:19.948171    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:30:19.948446    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:19.950699    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:30:19.989472    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:30:19.993566    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:30:19.993611    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:30:19.994231    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 
09:30:19.994259    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:19.994577    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:30:19.994824    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:30:19.994846    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:30:20.060829    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:30:20.060974    5402 controller.go:555] controller loop complete\nI0623 09:30:30.064032    5402 controller.go:187] starting controller iteration\nI0623 09:30:30.064086    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:30.064456    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:30:30.065029    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:30.065876    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:30:30.114697    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:30:30.114857    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:30:30.114879    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:30:30.115245    5402 etcdserver.go:252] updating hosts: 
map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:30.115265    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:30.115350    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:30:30.115490    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:30:30.115505    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:30:30.181818    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:30:30.181940    5402 controller.go:555] controller loop complete\nI0623 09:30:40.184183    5402 controller.go:187] starting controller iteration\nI0623 09:30:40.184229    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:40.184614    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:30:40.184857    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:40.187620    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:30:40.225083    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:30:40.228222    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:30:40.228285    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:30:40.231664   
 5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:40.231697    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:40.232127    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:30:40.232311    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:30:40.232328    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:30:40.300459    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:30:40.300616    5402 controller.go:555] controller loop complete\nI0623 09:30:50.302602    5402 controller.go:187] starting controller iteration\nI0623 09:30:50.302642    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:50.303389    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:30:50.303767    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:30:50.305261    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:30:50.338183    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:30:50.338412    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:30:50.338464    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" 
addresses:\"10.0.16.6:3997\" > \nI0623 09:30:50.338758    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:50.338830    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:50.338963    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:30:50.339264    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:30:50.339290    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:30:50.402602    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:30:50.402802    5402 controller.go:555] controller loop complete\nI0623 09:30:55.132644    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:30:55.373378    5402 volumes.go:234] volume \"a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:30:55.895994    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:30:55.896427    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:00.404812    5402 controller.go:187] starting controller iteration\nI0623 09:31:00.404861    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:00.405285    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:31:00.405495    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:00.406209    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:31:00.439003    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:31:00.441034    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:31:00.441288    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:31:00.442326    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:00.442352    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:00.442441    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:00.442576    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:31:00.442593    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:31:00.498256    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:31:00.498384    5402 controller.go:555] controller loop complete\nI0623 09:31:10.503613    5402 controller.go:187] starting controller iteration\nI0623 09:31:10.503667    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:10.504036    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:31:10.504241    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:10.505007    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:31:10.567399    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:31:10.567617    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:31:10.567646    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:31:10.568346    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:10.568398    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:10.568515    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:10.569037    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:31:10.569062    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:31:10.641933    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:31:10.642064    5402 controller.go:555] controller loop complete\nI0623 09:31:20.646430    5402 controller.go:187] starting controller iteration\nI0623 09:31:20.646474    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:20.647888    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:31:20.648158    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:20.650309    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:31:20.675770    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > 
etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:31:20.676237    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:31:20.676276    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:31:20.676564    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:20.676591    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:20.676679    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:20.676820    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:31:20.676845    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:31:20.744657    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:31:20.745257    5402 controller.go:555] controller loop complete\nI0623 09:31:30.747291    5402 controller.go:187] starting controller iteration\nI0623 09:31:30.747349    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:30.747674    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:31:30.747889    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:30.748638    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:31:30.770506    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:31:30.770632    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:31:30.770655    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:31:30.773662    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:30.773689    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:30.773771    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:30.773913    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:31:30.773963    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:31:30.837931    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:31:30.838060    5402 controller.go:555] controller loop complete\nI0623 09:31:40.839367    5402 controller.go:187] starting controller iteration\nI0623 09:31:40.839424    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:40.840081    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:31:40.840530    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:40.841374    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:31:40.865027    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:31:40.865253    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:31:40.865278    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:31:40.865552    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:40.865571    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:40.865668    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:40.865857    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:31:40.865931    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:31:40.926922    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:31:40.927154    5402 controller.go:555] controller loop complete\nI0623 09:31:50.929247    5402 controller.go:187] starting controller iteration\nI0623 09:31:50.929290    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:50.930153    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:31:50.931237    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:31:50.936037    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:31:50.964710    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:31:50.964927    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:31:50.964955    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:31:50.965591    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:50.965623    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:50.965703    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:31:50.965837    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:31:50.965852    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:31:51.031765    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:31:51.031889    5402 controller.go:555] controller loop complete\nI0623 09:31:55.897328    5402 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:31:56.104978    5402 volumes.go:234] volume \"a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-events-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:31:56.666751    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:31:56.666869    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:32:01.033120    5402 controller.go:187] starting controller iteration\nI0623 09:32:01.033156    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:01.033637    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 
09:32:01.034004    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:01.034542    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:32:01.061455    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:32:01.061631    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:32:01.061655    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:32:01.061902    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:01.061919    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:01.062015    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:32:01.062734    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:32:01.062760    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:32:01.130407    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:32:01.130505    5402 controller.go:555] controller loop complete\nI0623 09:32:11.131766    5402 controller.go:187] starting controller iteration\nI0623 09:32:11.131820    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:11.132230    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" 
endpoints:\"10.0.16.6:3997\" > > \nI0623 09:32:11.132454    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:11.137679    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:32:11.196781    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:32:11.197181    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:32:11.197336    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:32:11.197882    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:11.198222    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:11.198523    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:32:11.198821    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:32:11.198944    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:32:11.261251    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:32:11.261379    5402 controller.go:555] controller loop complete\nI0623 09:32:21.268058    5402 controller.go:187] starting controller iteration\nI0623 09:32:21.268103    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:21.268468    5402 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > 
leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:32:21.268661    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:21.269346    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:32:21.317911    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:32:21.318565    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:32:21.318819    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:32:21.325946    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:21.325972    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:21.326064    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:32:21.326202    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:32:21.326229    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:32:21.392336    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:32:21.392457    5402 controller.go:555] controller loop complete\nI0623 09:32:31.398610    5402 controller.go:187] starting controller iteration\nI0623 09:32:31.398676    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:31.399149    5402 leadership.go:37] Got LeaderNotification 
view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:32:31.399390    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:31.401132    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:32:31.524103    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:32:31.524318    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:32:31.524352    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:32:31.529237    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:31.531306    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:31.532486    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:32:31.534078    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:32:31.535785    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:32:31.622038    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:32:31.622160    5402 controller.go:555] controller loop complete\nI0623 09:32:41.623412    5402 controller.go:187] starting controller iteration\nI0623 09:32:41.623456    5402 controller.go:264] Broadcasting leadership assertion with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:41.623992    5402 
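
Each block between "starting controller iteration" and "controller loop complete" above is one pass of a level-triggered reconcile loop running roughly every ten seconds, and the commands.go:38 lines show a TTL gate that avoids re-listing the control bucket on every pass. A rough Go sketch of that shape, with invented names rather than etcd-manager's actual controller:

    package main

    import (
        "fmt"
        "time"
    )

    // ttlGate runs refresh only when the TTL has elapsed, mirroring the
    // "not refreshing commands - TTL not hit" lines above.
    type ttlGate struct {
        ttl  time.Duration
        last time.Time
    }

    func (g *ttlGate) maybeRefresh(refresh func() error) error {
        if time.Since(g.last) < g.ttl {
            fmt.Println("not refreshing commands - TTL not hit")
            return nil
        }
        if err := refresh(); err != nil {
            return err
        }
        g.last = time.Now()
        return nil
    }

    func main() {
        gate := &ttlGate{ttl: time.Minute}
        ticker := time.NewTicker(10 * time.Second) // the cadence seen in the log
        defer ticker.Stop()
        for range ticker.C {
            fmt.Println("starting controller iteration")
            // ... assert leadership, read cluster state, update hosts ...
            _ = gate.maybeRefresh(func() error {
                fmt.Println("refreshing commands")
                return nil
            })
            fmt.Println("controller loop complete")
        }
    }
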
leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > leadership_token:\"N0aINe5eBSwrjIWL9IbI7g\" healthy:<id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" > > \nI0623 09:32:41.624655    5402 controller.go:293] I am leader with token \"N0aINe5eBSwrjIWL9IbI7g\"\nI0623 09:32:41.625344    5402 controller.go:699] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002]\nI0623 09:32:41.665505    5402 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"10.0.16.6:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" > etcd_state:<cluster:<cluster_token:\"55_GJFLFeFHsPvHfe5Baqw\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\" client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3995\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:32:41.665662    5402 controller.go:301] etcd cluster members: map[10388124585224033663:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4002\"],\"ID\":\"10388124585224033663\"}]\nI0623 09:32:41.665684    5402 controller.go:635] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3997\" > \nI0623 09:32:41.666411    5402 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:41.666435    5402 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-events-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:32:41.666521    5402 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:32:41.666755    5402 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:32:41.666775    5402 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/events/control/etcd-cluster-created\"\nI0623 09:32:41.728660    5402 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:32:41.728781    5402 controller.go:555] controller loop complete\n==== END logs for container etcd-manager of pod kube-system/etcd-manager-events-master-us-west4-a-w636 ====\n==== START logs for container etcd-manager of pod 
kube-system/etcd-manager-main-master-us-west4-a-w636 ====\netcd-manager\nI0623 09:22:40.499693    5439 volumes.go:97] Found project=\"k8s-boskos-gce-project-09\"\nI0623 09:22:40.500818    5439 volumes.go:110] Found zone=\"us-west4-a\"\nI0623 09:22:40.500879    5439 volumes.go:117] Found region=\"us-west4\"\nI0623 09:22:40.501870    5439 volumes.go:130] Found instanceName=\"master-us-west4-a-w636\"\nI0623 09:22:40.502838    5439 volumes.go:146] Found internalIP=\"10.0.16.6\"\nI0623 09:22:40.762839    5439 main.go:306] Mounting available etcd volumes matching tags [k8s-io-cluster-name=e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local k8s-io-etcd-main k8s-io-role-master=master]; nameTag=k8s-io-etcd-main\nI0623 09:22:40.762961    5439 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:22:40.959262    5439 mounter.go:304] Trying to mount master volume: \"https://www.googleapis.com/compute/beta/projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\"\nI0623 09:22:51.098078    5439 mounter.go:318] Currently attached volumes: [0xc0000fc000]\nI0623 09:22:51.098205    5439 mounter.go:72] Master volume \"https://www.googleapis.com/compute/beta/projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached at \"/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\"\nI0623 09:22:51.098281    5439 mounter.go:86] Doing safe-format-and-mount of /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local to /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.098335    5439 mounter.go:125] Found volume \"https://www.googleapis.com/compute/beta/projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" mounted at device \"/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\"\nI0623 09:22:51.099215    5439 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\"\nI0623 09:22:51.099459    5439 mounter.go:176] Mounting device \"/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" on \"/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\"\nI0623 09:22:51.099506    5439 mount_linux.go:487] Attempting to determine if disk \"/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local])\nI0623 09:22:51.099573    5439 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local]\nI0623 09:22:51.107240    5439 mount_linux.go:490] Output: \"\"\nI0623 09:22:51.107271    5439 mount_linux.go:449] Disk \"/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local]\nI0623 09:22:51.107289    5439 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local]\nI0623 09:22:51.215261  
  5439 mount_linux.go:459] Disk successfully formatted (mkfs): ext4 - /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.215297    5439 mount_linux.go:477] Attempting to mount disk /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local in ext4 format at /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.215308    5439 nsenter.go:80] nsenter mount /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local ext4 [defaults]\nI0623 09:22:51.215343    5439 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local --scope -- /bin/mount -t ext4 -o defaults /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local]\nI0623 09:22:51.239462    5439 nsenter.go:84] Output of mounting /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local to /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local: Running scope as unit: run-r5485e3ef9e064f96baffe8095344c091.scope\nI0623 09:22:51.239505    5439 mount_linux.go:487] Attempting to determine if disk \"/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local])\nI0623 09:22:51.239540    5439 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local]\nI0623 09:22:51.256708    5439 mount_linux.go:490] Output: \"DEVNAME=/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\\nTYPE=ext4\\n\"\nI0623 09:22:51.256880    5439 resizefs_linux.go:56] ResizeFS.Resize - Expanding mounted volume /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.256904    5439 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local]\nI0623 09:22:51.260777    5439 resizefs_linux.go:71] Device /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local resized successfully\nI0623 09:22:51.279391    5439 mount_linux.go:222] Detected OS with systemd\nI0623 09:22:51.282027    5439 mounter.go:224] mounting inside container: /rootfs/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local -> /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.282057    5439 mount_linux.go:183] Mounting cmd (systemd-run) with arguments (--description=Kubernetes transient mount for /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local --scope -- mount  /rootfs/dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local)\nI0623 09:22:51.298693    5439 mounter.go:94] mounted master volume 
\"https://www.googleapis.com/compute/beta/projects/k8s-boskos-gce-project-09/zones/us-west4-a/disks/a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" on /mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.298730    5439 main.go:321] discovered IP address: 10.0.16.6\nI0623 09:22:51.298745    5439 main.go:326] Setting data dir to /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:51.432945    5439 certs.go:211] generating certificate for \"etcd-manager-server-etcd-a\"\nI0623 09:22:51.533595    5439 certs.go:211] generating certificate for \"etcd-manager-client-etcd-a\"\nI0623 09:22:51.536838    5439 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-a\"\nI0623 09:22:51.538677    5439 main.go:474] peerClientIPs: [10.0.16.6]\nI0623 09:22:51.892975    5439 certs.go:211] generating certificate for \"etcd-manager-etcd-a\"\nI0623 09:22:51.895416    5439 server.go:105] GRPC server listening on \"10.0.16.6:3996\"\nI0623 09:22:51.895575    5439 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:22:52.083101    5439 volumes.go:234] volume \"a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:22:52.637611    5439 peers.go:116] found new candidate peer from discovery: etcd-a [{10.0.16.6 0}]\nI0623 09:22:52.637785    5439 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:22:52.638065    5439 peers.go:296] connecting to peer \"etcd-a\" with TLS policy, servername=\"etcd-manager-server-etcd-a\"\nI0623 09:22:53.895555    5439 controller.go:187] starting controller iteration\nI0623 09:22:53.896229    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:22:53.896925    5439 commands.go:41] refreshing commands\nI0623 09:22:53.979313    5439 vfs.go:120] listed commands in gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control: 0 commands\nI0623 09:22:53.979425    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-spec\"\nI0623 09:23:04.074264    5439 controller.go:187] starting controller iteration\nI0623 09:23:04.074422    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:04.075203    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:04.075653    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:04.076467    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" 
client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > }\nI0623 09:23:04.076794    5439 controller.go:301] etcd cluster members: map[]\nI0623 09:23:04.076810    5439 controller.go:635] sending member map to all peers: \nI0623 09:23:04.077217    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:04.077240    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:04.145098    5439 controller.go:357] detected that there is no existing cluster\nI0623 09:23:04.145157    5439 commands.go:41] refreshing commands\nI0623 09:23:04.203377    5439 vfs.go:120] listed commands in gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control: 0 commands\nI0623 09:23:04.203497    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-spec\"\nI0623 09:23:04.298141    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:04.298781    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:04.298817    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:04.298915    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:04.299135    5439 newcluster.go:132] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > }]\nI0623 09:23:04.300363    5439 newcluster.go:149] JoinClusterResponse: \nI0623 09:23:04.302367    5439 etcdserver.go:560] starting etcd with state new_cluster:true cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" quarantined:true \nI0623 09:23:04.302436    5439 etcdserver.go:569] starting etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\nI0623 09:23:04.303307    5439 pki.go:58] adding peerClientIPs [10.0.16.6]\nI0623 09:23:04.303417    5439 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local] IPs:[10.0.16.6 127.0.0.1 ::1]} Usages:[2 1]}\nI0623 09:23:04.469935    5439 certs.go:211] 
generating certificate for \"etcd-a\"\nI0623 09:23:04.473562    5439 pki.go:108] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local] IPs:[127.0.0.1 ::1]} Usages:[1 2]}\nI0623 09:23:04.581842    5439 certs.go:211] generating certificate for \"etcd-a\"\nI0623 09:23:04.986334    5439 certs.go:211] generating certificate for \"etcd-a\"\nI0623 09:23:04.989017    5439 etcdprocess.go:210] executing command /opt/etcd-v3.5.4-linux-amd64/etcd [/opt/etcd-v3.5.4-linux-amd64/etcd]\nI0623 09:23:04.989526    5439 etcdprocess.go:315] started etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw; pid=5857\nI0623 09:23:04.990222    5439 newcluster.go:167] JoinClusterResponse: \nI0623 09:23:04.990368    5439 gsfs.go:184] Writing file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-spec\"\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_ADVERTISE_CLIENT_URLS\",\"variable-value\":\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_CERT_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_CLIENT_CERT_AUTH\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_DATA_DIR\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_ENABLE_V2\",\"variable-value\":\"false\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_ADVERTISE_PEER_URLS\",\"variable-value\":\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER\",\"variable-value\":\"etcd-a=https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment 
variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER_STATE\",\"variable-value\":\"new\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER_TOKEN\",\"variable-value\":\"9eId4zZifwcUb9IppvvUnw\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.021Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_KEY_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.key\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LISTEN_CLIENT_URLS\",\"variable-value\":\"https://0.0.0.0:3994\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LISTEN_PEER_URLS\",\"variable-value\":\"https://0.0.0.0:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LOG_OUTPUTS\",\"variable-value\":\"stdout\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LOGGER\",\"variable-value\":\"zap\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_NAME\",\"variable-value\":\"etcd-a\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_CERT_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_CLIENT_CERT_AUTH\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_KEY_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.key\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_TRUSTED_CA_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/ca.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_STRICT_RECONFIG_CHECK\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment 
variable\",\"variable-name\":\"ETCD_TRUSTED_CA_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/ca.crt\"}\n{\"level\":\"warn\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCD_LISTEN_METRICS_URLS=\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"etcdmain/etcd.go:73\",\"msg\":\"Running: \",\"args\":[\"/opt/etcd-v3.5.4-linux-amd64/etcd\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"embed/etcd.go:131\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.022Z\",\"caller\":\"embed/etcd.go:479\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.crt, key = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.key, client-cert=, client-key=, trusted-ca = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.023Z\",\"caller\":\"embed/etcd.go:139\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3994\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.023Z\",\"caller\":\"embed/etcd.go:308\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.5.4\",\"git-sha\":\"08407ff76\",\"go-version\":\"go1.16.15\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-a=https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"9eId4zZifwcUb9IppvvUnw\",\"quota-size-bytes\":2147483648,\"pre-vote\":true,\"initial-corrupt-check\":true,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\",\"downgrade-check-interval\":\"5s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.028Z\",\"caller\":\"etcdserver/backend.go:81\",\"msg\":\"opened backend 
db\",\"path\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw/member/snap/db\",\"took\":\"2.762255ms\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.029Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\",\"host\":\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\",\"resolved-addr\":\"10.0.16.6:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.029Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\",\"host\":\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\",\"resolved-addr\":\"10.0.16.6:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.042Z\",\"caller\":\"etcdserver/raft.go:448\",\"msg\":\"starting local member\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"cluster-id\":\"b93c3cd451affc9e\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.042Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.042Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.042Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"newRaft 9fd9903ca1b0c86f [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.042Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.042Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f switched to configuration voters=(11518396112061909103)\"}\n{\"level\":\"warn\",\"ts\":\"2022-06-23T09:23:05.045Z\",\"caller\":\"auth/store.go:1220\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.047Z\",\"caller\":\"mvcc/kvstore.go:415\",\"msg\":\"kvstore restored\",\"current-rev\":1}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.049Z\",\"caller\":\"etcdserver/quota.go:94\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.051Z\",\"caller\":\"etcdserver/server.go:851\",\"msg\":\"starting etcd server\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"local-server-version\":\"3.5.4\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.056Z\",\"caller\":\"embed/etcd.go:688\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.crt, key = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.key, client-cert=, client-key=, trusted-ca = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.056Z\",\"caller\":\"embed/etcd.go:581\",\"msg\":\"serving peer 
traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.057Z\",\"caller\":\"embed/etcd.go:553\",\"msg\":\"cmux::serve\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.057Z\",\"caller\":\"embed/etcd.go:277\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.057Z\",\"caller\":\"etcdserver/server.go:736\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.058Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f switched to configuration voters=(11518396112061909103)\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.058Z\",\"caller\":\"membership/cluster.go:421\",\"msg\":\"added member\",\"cluster-id\":\"b93c3cd451affc9e\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"added-peer-id\":\"9fd9903ca1b0c86f\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"]}\nI0623 09:23:05.206773    5439 gsfs.go:184] Writing file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became pre-candidate at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f received MsgPreVoteResp from 9fd9903ca1b0c86f at term 1\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f received MsgVoteResp from 9fd9903ca1b0c86f at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.343Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"raft.node: 9fd9903ca1b0c86f elected leader 9fd9903ca1b0c86f at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.344Z\",\"caller\":\"etcdserver/server.go:2507\",\"msg\":\"setting up initial cluster version using v2 API\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.345Z\",\"caller\":\"membership/cluster.go:584\",\"msg\":\"set initial cluster 
version\",\"cluster-id\":\"b93c3cd451affc9e\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.345Z\",\"caller\":\"api/capability.go:75\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.345Z\",\"caller\":\"etcdserver/server.go:2531\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.345Z\",\"caller\":\"etcdserver/server.go:2042\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994]}\",\"request-path\":\"/0/members/9fd9903ca1b0c86f/attributes\",\"cluster-id\":\"b93c3cd451affc9e\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.345Z\",\"caller\":\"embed/serve.go:98\",\"msg\":\"ready to serve client requests\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.346Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.346Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.347Z\",\"caller\":\"embed/serve.go:188\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3994\"}\nI0623 09:23:05.366161    5439 controller.go:187] starting controller iteration\nI0623 09:23:05.366200    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:05.366497    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:05.366655    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:05.367359    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994]\nI0623 09:23:05.389811    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" quarantined:true > }\nI0623 09:23:05.389995    5439 controller.go:301] etcd cluster members: 
map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:23:05.390439    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:05.391008    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:05.391036    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:05.391119    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:05.391520    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:05.391540    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:05.481651    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:05.483600    5439 backup.go:128] performing snapshot save to /tmp/745511728/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.494Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:211\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.496Z\",\"caller\":\"v3rpc/maintenance.go:125\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.497Z\",\"caller\":\"v3rpc/maintenance.go:165\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.497Z\",\"caller\":\"v3rpc/maintenance.go:174\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:05.499Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:219\",\"msg\":\"completed snapshot read; closing\"}\nI0623 09:23:05.500180    5439 gsfs.go:184] Writing file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/2022-06-23T09:23:05Z-000001/etcd.backup.gz\"\nI0623 09:23:05.663361    5439 gsfs.go:184] Writing file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/2022-06-23T09:23:05Z-000001/_etcd_backup.meta\"\nI0623 09:23:05.824373    5439 backup.go:153] backup complete: name:\"2022-06-23T09:23:05Z-000001\" \nI0623 09:23:05.825017    5439 controller.go:931] backup response: name:\"2022-06-23T09:23:05Z-000001\" \nI0623 09:23:05.825050    5439 controller.go:574] took backup: name:\"2022-06-23T09:23:05Z-000001\" \nI0623 09:23:05.882073    5439 vfs.go:118] listed backups in gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main: [2022-06-23T09:23:05Z-000001]\nI0623 09:23:05.882265    5439 cleanup.go:166] retaining backup \"2022-06-23T09:23:05Z-000001\"\nI0623 09:23:05.882394    5439 restore.go:98] Setting quarantined state to false\nI0623 09:23:05.882874    5439 
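
The backup above is an ordinary etcd snapshot streamed through gzip and then written to the GCS state store alongside a small _etcd_backup.meta file. A minimal sketch of the snapshot-and-compress step via the etcd clientv3 Maintenance API; the endpoint is a placeholder, and the TLS setup and GCS upload are omitted:

    package main

    import (
        "compress/gzip"
        "context"
        "io"
        "log"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // Saves a gzipped etcd snapshot, mirroring the
    // "performing snapshot save to /tmp/.../snapshot.db.gz" step above.
    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://127.0.0.1:4001"}, // placeholder; real use needs TLS config
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        rc, err := cli.Snapshot(ctx) // streams the backend database
        if err != nil {
            log.Fatal(err)
        }
        defer rc.Close()

        out, err := os.Create("/tmp/snapshot.db.gz")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        gz := gzip.NewWriter(out)
        defer gz.Close() // runs before out.Close, flushing the gzip stream
        if _, err := io.Copy(gz, rc); err != nil {
            log.Fatal(err)
        }
    }
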
etcdserver.go:397] Reconfigure request: header:<leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" cluster_name:\"etcd\" > \nI0623 09:23:05.882985    5439 etcdserver.go:440] Stopping etcd for reconfigure request: header:<leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" cluster_name:\"etcd\" > \nI0623 09:23:05.883023    5439 etcdserver.go:644] killing etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\nI0623 09:23:05.883128    5439 etcdprocess.go:136] Waiting for etcd to exit\nI0623 09:23:05.886132    5439 etcdprocess.go:331] etcd process exited (datadir /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw; pid=5857); exitCode=-1, exitErr=<nil>\nI0623 09:23:05.983550    5439 etcdprocess.go:136] Waiting for etcd to exit\nI0623 09:23:05.983731    5439 etcdprocess.go:141] Exited etcd: signal: killed\nI0623 09:23:05.984164    5439 etcdserver.go:447] updated cluster state: cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" \nI0623 09:23:05.984551    5439 etcdserver.go:452] Starting etcd version \"3.5.4\"\nI0623 09:23:05.984582    5439 etcdserver.go:560] starting etcd with state cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" \nI0623 09:23:05.984643    5439 etcdserver.go:569] starting etcd with datadir /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\nI0623 09:23:05.984967    5439 pki.go:58] adding peerClientIPs [10.0.16.6]\nI0623 09:23:05.985003    5439 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local] IPs:[10.0.16.6 127.0.0.1 ::1]} Usages:[2 1]}\nI0623 09:23:05.985381    5439 certs.go:151] existing certificate not valid after 2024-06-22T09:23:04Z; will regenerate\nI0623 09:23:05.985446    5439 certs.go:211] generating certificate for \"etcd-a\"\nI0623 09:23:05.989367    5439 pki.go:108] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local] IPs:[127.0.0.1 ::1]} Usages:[1 2]}\nI0623 09:23:05.989597    5439 certs.go:151] existing certificate not valid after 2024-06-22T09:23:04Z; will regenerate\nI0623 09:23:05.989655    5439 certs.go:211] generating certificate for \"etcd-a\"\nI0623 09:23:06.357417    5439 certs.go:211] generating certificate for \"etcd-a\"\nI0623 09:23:06.359816    5439 etcdprocess.go:210] executing command /opt/etcd-v3.5.4-linux-amd64/etcd [/opt/etcd-v3.5.4-linux-amd64/etcd]\nI0623 09:23:06.360285    5439 etcdprocess.go:315] started etcd with datadir 
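
The reconfigure sequence here completes the quarantine pattern visible in this pod's log: the freshly created cluster served clients only on the quarantined port (3994) until the first backup existed, after which etcd-manager killed the process and restarted it against the same data dir with initial-cluster-state "existing" and the real advertised client URL (4001). A compressed sketch of that stop-and-restart step, with invented helper names and placeholder URLs:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
        "time"
    )

    // restartUnquarantined stops a running etcd child and starts a new one
    // advertising the real client port -- a sketch only, not etcd-manager's code.
    func restartUnquarantined(old *exec.Cmd) (*exec.Cmd, error) {
        // Ask the old process to exit; escalate to SIGKILL, as in the
        // "killing etcd with datadir ..." / "Exited etcd: signal: killed" lines.
        _ = old.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- old.Wait() }()
        select {
        case <-done:
        case <-time.After(5 * time.Second):
            _ = old.Process.Kill()
            <-done
        }

        next := exec.Command("/opt/etcd-v3.5.4-linux-amd64/etcd")
        next.Env = append(os.Environ(),
            "ETCD_INITIAL_CLUSTER_STATE=existing", // rejoin the data dir, don't bootstrap
            "ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.example:4001", // leave quarantine
        )
        next.Stdout, next.Stderr = os.Stdout, os.Stderr
        return next, next.Start()
    }

    func main() {
        // Illustrative only: start a quarantined child, then bounce it.
        quarantined := exec.Command("/opt/etcd-v3.5.4-linux-amd64/etcd")
        quarantined.Env = append(os.Environ(),
            "ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.example:3994")
        if err := quarantined.Start(); err != nil {
            log.Fatal(err)
        }
        if _, err := restartUnquarantined(quarantined); err != nil {
            log.Fatal(err)
        }
    }
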
/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw; pid=5867\nI0623 09:23:06.360703    5439 restore.go:116] ReconfigureResponse: \nI0623 09:23:06.362054    5439 controller.go:187] starting controller iteration\nI0623 09:23:06.362122    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:06.362465    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:06.362851    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:06.363740    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_ADVERTISE_CLIENT_URLS\",\"variable-value\":\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_CERT_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_CLIENT_CERT_AUTH\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_DATA_DIR\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_ENABLE_V2\",\"variable-value\":\"false\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_ADVERTISE_PEER_URLS\",\"variable-value\":\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER\",\"variable-value\":\"etcd-a=https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER_STATE\",\"variable-value\":\"existing\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment 
variable\",\"variable-name\":\"ETCD_INITIAL_CLUSTER_TOKEN\",\"variable-value\":\"9eId4zZifwcUb9IppvvUnw\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_KEY_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.key\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LISTEN_CLIENT_URLS\",\"variable-value\":\"https://0.0.0.0:4001\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LISTEN_PEER_URLS\",\"variable-value\":\"https://0.0.0.0:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LOG_OUTPUTS\",\"variable-value\":\"stdout\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_LOGGER\",\"variable-value\":\"zap\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_NAME\",\"variable-value\":\"etcd-a\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_CERT_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_CLIENT_CERT_AUTH\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_KEY_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.key\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_PEER_TRUSTED_CA_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/ca.crt\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_STRICT_RECONFIG_CHECK\",\"variable-value\":\"true\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:113\",\"msg\":\"recognized and used environment variable\",\"variable-name\":\"ETCD_TRUSTED_CA_FILE\",\"variable-value\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/ca.crt\"}\n{\"level\":\"warn\",\"ts\":\"2022-06-23T09:23:06.379Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment 
variable\",\"environment-variable\":\"ETCD_LISTEN_METRICS_URLS=\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.380Z\",\"caller\":\"etcdmain/etcd.go:73\",\"msg\":\"Running: \",\"args\":[\"/opt/etcd-v3.5.4-linux-amd64/etcd\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.380Z\",\"caller\":\"etcdmain/etcd.go:116\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.380Z\",\"caller\":\"embed/etcd.go:131\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.380Z\",\"caller\":\"embed/etcd.go:479\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.crt, key = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/me.key, client-cert=, client-key=, trusted-ca = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.381Z\",\"caller\":\"embed/etcd.go:139\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4001\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.381Z\",\"caller\":\"embed/etcd.go:308\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.5.4\",\"git-sha\":\"08407ff76\",\"go-version\":\"go1.16.15\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":true,\"initial-corrupt-check\":true,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\",\"downgrade-check-interval\":\"5s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.381Z\",\"caller\":\"etcdserver/backend.go:81\",\"msg\":\"opened backend 
db\",\"path\":\"/rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/data/9eId4zZifwcUb9IppvvUnw/member/snap/db\",\"took\":\"188.43µs\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.382Z\",\"caller\":\"etcdserver/server.go:529\",\"msg\":\"No snapshot found. Recovering WAL from scratch!\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.383Z\",\"caller\":\"etcdserver/raft.go:483\",\"msg\":\"restarting local member\",\"cluster-id\":\"b93c3cd451affc9e\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.383Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.383Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.383Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"newRaft 9fd9903ca1b0c86f [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2022-06-23T09:23:06.385Z\",\"caller\":\"auth/store.go:1220\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.387Z\",\"caller\":\"mvcc/kvstore.go:415\",\"msg\":\"kvstore restored\",\"current-rev\":1}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.388Z\",\"caller\":\"etcdserver/quota.go:94\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.390Z\",\"caller\":\"etcdserver/corrupt.go:46\",\"msg\":\"starting initial corruption check\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.391Z\",\"caller\":\"etcdserver/corrupt.go:116\",\"msg\":\"initial corruption checking passed; no corruption\",\"local-member-id\":\"9fd9903ca1b0c86f\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.391Z\",\"caller\":\"etcdserver/server.go:851\",\"msg\":\"starting etcd server\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"local-server-version\":\"3.5.4\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.391Z\",\"caller\":\"etcdserver/server.go:752\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.391Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f switched to configuration voters=(11518396112061909103)\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.392Z\",\"caller\":\"membership/cluster.go:421\",\"msg\":\"added member\",\"cluster-id\":\"b93c3cd451affc9e\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"added-peer-id\":\"9fd9903ca1b0c86f\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.392Z\",\"caller\":\"membership/cluster.go:584\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"b93c3cd451affc9e\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.392Z\",\"caller\":\"api/capability.go:75\",\"msg\":\"enabled capabilities for 
version\",\"cluster-version\":\"3.5\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.397Z\",\"caller\":\"embed/etcd.go:581\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.397Z\",\"caller\":\"embed/etcd.go:688\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.crt, key = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/server.key, client-cert=, client-key=, trusted-ca = /rootfs/mnt/master-us-west4-a-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local/pki/9eId4zZifwcUb9IppvvUnw/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.397Z\",\"caller\":\"embed/etcd.go:553\",\"msg\":\"cmux::serve\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:06.397Z\",\"caller\":\"embed/etcd.go:277\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became pre-candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f received MsgPreVoteResp from 9fd9903ca1b0c86f at term 2\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f received MsgVoteResp from 9fd9903ca1b0c86f at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"9fd9903ca1b0c86f became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.084Z\",\"logger\":\"raft\",\"caller\":\"etcdserver/zap_raft.go:77\",\"msg\":\"raft.node: 9fd9903ca1b0c86f elected leader 9fd9903ca1b0c86f at term 3\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.087Z\",\"caller\":\"etcdserver/server.go:2042\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"9fd9903ca1b0c86f\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]}\",\"request-path\":\"/0/members/9fd9903ca1b0c86f/attributes\",\"cluster-id\":\"b93c3cd451affc9e\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.087Z\",\"caller\":\"embed/serve.go:98\",\"msg\":\"ready to serve client requests\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.088Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init 
daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.088Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\n{\"level\":\"info\",\"ts\":\"2022-06-23T09:23:08.089Z\",\"caller\":\"embed/serve.go:188\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4001\"}\nI0623 09:23:08.118095    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:08.118323    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:23:08.118342    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:08.118592    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:08.118607    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:08.118662    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:08.118745    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:08.118756    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:08.177919    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:08.178315    5439 controller.go:555] controller loop complete\nI0623 09:23:18.180502    5439 controller.go:187] starting controller iteration\nI0623 09:23:18.180554    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:18.180907    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:18.181097    5439 controller.go:293] I am leader with token 
\"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:18.181725    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:23:18.224850    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:18.225017    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:23:18.225048    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:18.225542    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:18.225567    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:18.225689    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:18.225805    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:18.225819    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:18.289768    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:18.289879    5439 controller.go:555] controller loop complete\nI0623 09:23:28.291248    5439 controller.go:187] starting controller iteration\nI0623 09:23:28.291288    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:28.291916    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:28.292153    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:28.292901    5439 controller.go:699] base client OK for etcd for client urls 
[https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:23:28.314315    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:28.314448    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:23:28.314469    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:28.315076    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:28.315117    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:28.315355    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:28.315578    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:28.315594    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:28.400859    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:28.401083    5439 controller.go:555] controller loop complete\nI0623 09:23:38.402682    5439 controller.go:187] starting controller iteration\nI0623 09:23:38.402711    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:38.402953    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:38.403110    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:38.403585    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:23:38.420055    5439 controller.go:300] etcd 
cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:38.420154    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:23:38.420175    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:38.420624    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:38.420654    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:38.420722    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:38.420846    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:38.420861    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:38.515889    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:38.516092    5439 controller.go:555] controller loop complete\nI0623 09:23:48.517344    5439 controller.go:187] starting controller iteration\nI0623 09:23:48.517375    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:48.517786    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:23:48.518162    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:23:48.518854    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:23:48.536379    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:23:48.536686    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:23:48.536801    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:23:48.537130    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:48.537148    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:48.537220    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:48.537334    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:23:48.537349    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:23:48.604547    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:23:48.604653    5439 controller.go:555] controller loop complete\nI0623 09:23:52.679086    5439 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:23:52.860985    5439 volumes.go:234] volume \"a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:23:53.379979    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:23:53.380113    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:23:58.606344    5439 controller.go:187] starting controller iteration\nI0623 09:23:58.606388    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 
... skipping repeated controller-loop iteration (09:23:58; identical output to the 09:23:08 iteration above) ...
{"level":"info","ts":"2022-06-23T09:24:06.733Z","caller":"traceutil/trace.go:171","msg":"trace[1439281778] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:531; }","duration":"125.108079ms","start":"2022-06-23T09:24:06.608Z","end":"2022-06-23T09:24:06.733Z","steps":["trace[1439281778] 'read index received'  (duration: 125.097399ms)","trace[1439281778] 'applied index is now lower than readState.Index'  (duration: 9.152µs)"],"step_count":2}
{"level":"warn","ts":"2022-06-23T09:24:06.734Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"126.844847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/master-us-west4-a-w636\" ","response":"range_response_count:1 size:4226"}
{"level":"info","ts":"2022-06-23T09:24:06.735Z","caller":"traceutil/trace.go:171","msg":"trace[391062914] range","detail":"{range_begin:/registry/minions/master-us-west4-a-w636; range_end:; response_count:1; response_revision:516; }","duration":"126.96692ms","start":"2022-06-23T09:24:06.608Z","end":"2022-06-23T09:24:06.734Z","steps":["trace[391062914] 'agreement among raft nodes before linearized reading'  (duration: 125.298079ms)"],"step_count":1}
{"level":"info","ts":"2022-06-23T09:24:06.736Z","caller":"traceutil/trace.go:171","msg":"trace[1306121530] transaction","detail":"{read_only:false; response_revision:517; number_of_response:1; }","duration":"114.543301ms","start":"2022-06-23T09:24:06.621Z","end":"2022-06-23T09:24:06.736Z","steps":["trace[1306121530] 'process raft request'  (duration: 111.854688ms)"],"step_count":1}
... skipping repeated controller-loop iterations (09:24:08, 09:24:18; identical output to the 09:23:08 iteration above) ...
{"level":"warn","ts":"2022-06-23T09:24:27.789Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.496313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/nodes-us-west4-a-6v6c\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-23T09:24:27.807Z","caller":"traceutil/trace.go:171","msg":"trace[1658782308] range","detail":"{range_begin:/registry/csinodes/nodes-us-west4-a-6v6c; range_end:; response_count:0; response_revision:664; }","duration":"122.081054ms","start":"2022-06-23T09:24:27.684Z","end":"2022-06-23T09:24:27.807Z","steps":["trace[1658782308] 'agreement among raft nodes before linearized reading'  (duration: 104.451346ms)"],"step_count":1}
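etcd logs "apply request took too long" whenever a request exceeds its 100ms expected-duration; the traces above attribute the 104-127ms mostly to waiting for agreement among raft nodes before a linearized read of /registry/... keys. A small sketch of observing that latency from the client side (client construction is assumed):

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// timedGet issues the same kind of linearizable (quorum) read the warnings
// above refer to, and reports how long it took end to end.
func timedGet(ctx context.Context, cli *clientv3.Client, key string) error {
	start := time.Now()
	resp, err := cli.Get(ctx, key) // linearizable by default; clientv3.WithSerializable() would skip the read-index wait
	if err != nil {
		return err
	}
	log.Printf("read %d kvs in %s", len(resp.Kvs), time.Since(start))
	return nil
}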
peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:24:28.997180    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:24:28.997203    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:24:28.997523    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:28.997542    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:28.997621    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:24:28.997744    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:24:28.997759    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:24:29.082783    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:24:29.083014    5439 controller.go:555] controller loop complete\nI0623 09:24:39.094432    5439 controller.go:187] starting controller iteration\nI0623 09:24:39.094481    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:24:39.094868    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:24:39.095043    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:24:39.095994    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:24:39.143840    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" 
client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:24:39.144453    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:24:39.145310    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:24:39.146531    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:39.146554    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:39.146628    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:24:39.146753    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:24:39.146765    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:24:39.210926    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:24:39.211044    5439 controller.go:555] controller loop complete\nI0623 09:24:49.213120    5439 controller.go:187] starting controller iteration\nI0623 09:24:49.213166    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:24:49.213488    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:24:49.213677    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:24:49.214762    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:24:49.258085    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" 
quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:24:49.258205    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:24:49.258227    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:24:49.258433    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:49.258450    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:49.258556    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:24:49.258687    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:24:49.258701    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:24:49.320209    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:24:49.320332    5439 controller.go:555] controller loop complete\nI0623 09:24:53.383274    5439 volumes.go:250] Listing GCE disks in k8s-boskos-gce-project-09/us-west4-a\nI0623 09:24:53.616111    5439 volumes.go:234] volume \"a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\" is attached to this instance at /dev/disk/by-id/google-a-etcd-main-e2e-pr13857-pull-kops-e2e-k8s-gce-k8s-local\nI0623 09:24:54.220788    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:54.220922    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:24:59.324277    5439 controller.go:187] starting controller iteration\nI0623 09:24:59.324322    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:24:59.324746    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:24:59.324989    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:24:59.325761    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:24:59.355141    5439 controller.go:300] etcd cluster state: 
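Roughly once a minute (09:23:52 and 09:24:53 above) the volume controller lists the GCE disks in the project/zone to confirm that the etcd data disk is still attached to this instance. A minimal sketch of that listing with the Compute Engine Go client (project and zone are placeholders; pagination beyond the first page is ignored):

package main

import (
	"context"
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

// listDisks enumerates GCE disks in one zone, the call behind the
// "Listing GCE disks in <project>/<zone>" lines above.
func listDisks(ctx context.Context, project, zone string) error {
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		return err
	}
	disks, err := svc.Disks.List(project, zone).Do()
	if err != nil {
		return err
	}
	for _, d := range disks.Items {
		fmt.Println(d.Name, d.Status)
	}
	return nil
}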
etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:24:59.355278    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:24:59.355302    5439 controller.go:635] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local\" addresses:\"10.0.16.6:3996\" > \nI0623 09:24:59.355639    5439 etcdserver.go:252] updating hosts: map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:59.355660    5439 hosts.go:84] hosts update: primary=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]], fallbacks=map[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:[10.0.16.6]], final=map[10.0.16.6:[etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local]]\nI0623 09:24:59.355742    5439 hosts.go:181] skipping update of unchanged /etc/hosts\nI0623 09:24:59.355862    5439 commands.go:38] not refreshing commands - TTL not hit\nI0623 09:24:59.355877    5439 gsfs.go:259] Reading file \"gs://k8s-boskos-gce-project-09-state-0e/e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local/backups/etcd/main/control/etcd-cluster-created\"\nI0623 09:24:59.424538    5439 controller.go:393] spec member_count:1 etcd_version:\"3.5.4\" \nI0623 09:24:59.424676    5439 controller.go:555] controller loop complete\nI0623 09:25:09.428091    5439 controller.go:187] starting controller iteration\nI0623 09:25:09.428146    5439 controller.go:264] Broadcasting leadership assertion with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:25:09.428447    5439 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > leadership_token:\"xQGVijvmSe0qWQHVnRqR6g\" healthy:<id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" > > \nI0623 09:25:09.428655    5439 controller.go:293] I am leader with token \"xQGVijvmSe0qWQHVnRqR6g\"\nI0623 09:25:09.429329    5439 controller.go:699] base client OK for etcd for client urls [https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001]\nI0623 09:25:09.467729    5439 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"10.0.16.6:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" > etcd_state:<cluster:<cluster_token:\"9eId4zZifwcUb9IppvvUnw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\" client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:3994\" tls_enabled:true > > etcd_version:\"3.5.4\" > }\nI0623 09:25:09.467843    5439 controller.go:301] etcd cluster members: map[11518396112061909103:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-pr13857.pull-kops-e2e-k8s-gce.k8s.local:4001\"],\"ID\":\"11518396112061909103\"}]\nI0623 09:25:09.467862    5439 controller.go:63