Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2022-06-23 13:04
Elapsed: 40m14s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 183 lines ...
Updating project ssh metadata...
.................................Updated [https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu].
.done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I0623 13:05:34.090258    5928 up.go:44] Cleaning up any leaked resources from previous cluster
I0623 13:05:34.090540    5928 dumplogs.go:45] /logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kops toolbox dump --name e2e-e2e-kops-gce-stable.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh1254532297/key --ssh-user prow
W0623 13:05:34.275009    5928 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 13:05:34.275232    5928 down.go:48] /logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kops delete cluster --name e2e-e2e-kops-gce-stable.k8s.local --yes
I0623 13:05:34.296867    5977 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 13:05:34.296984    5977 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-gce-stable.k8s.local" not found
I0623 13:05:34.412263    5928 gcs.go:51] gsutil ls -b -p kube-gce-upg-1-4-1-5-upg-clu gs://kube-gce-upg-1-4-1-5-upg-clu-state-e8
I0623 13:05:36.158308    5928 gcs.go:70] gsutil mb -p kube-gce-upg-1-4-1-5-upg-clu gs://kube-gce-upg-1-4-1-5-upg-clu-state-e8
Creating gs://kube-gce-upg-1-4-1-5-upg-clu-state-e8/...
I0623 13:05:38.241858    5928 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/23 13:05:38 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0623 13:05:38.250176    5928 http.go:37] curl https://ip.jsb.workers.dev
I0623 13:05:38.342724    5928 up.go:159] /logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kops create cluster --name e2e-e2e-kops-gce-stable.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.25.0-alpha.1 --ssh-public-key /tmp/kops-ssh1254532297/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --gce-service-account=default --admin-access 35.238.122.147/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-central1-a --master-size e2-standard-2 --project kube-gce-upg-1-4-1-5-upg-clu
I0623 13:05:38.365146    6267 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 13:05:38.365257    6267 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0623 13:05:38.393388    6267 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh1254532297/key.pub
I0623 13:05:38.631232    6267 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 375 lines ...
I0623 13:05:45.651460    6289 keypair.go:225] Issuing new certificate: "etcd-peers-ca-main"
W0623 13:05:45.656049    6289 vfs_castore.go:379] CA private key was not found
I0623 13:05:45.657434    6289 keypair.go:225] Issuing new certificate: "etcd-clients-ca"
I0623 13:05:45.751329    6289 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0623 13:05:45.852745    6289 keypair.go:225] Issuing new certificate: "service-account"
I0623 13:06:01.325120    6289 executor.go:111] Tasks: 42 done / 68 total; 20 can run
W0623 13:06:11.103712    6289 executor.go:139] error running task "ForwardingRule/api-e2e-e2e-kops-gce-stable-k8s-local" (9m50s remaining to succeed): error creating ForwardingRule "api-e2e-e2e-kops-gce-stable-k8s-local": googleapi: Error 400: The resource 'projects/kube-gce-upg-1-4-1-5-upg-clu/regions/us-central1/targetPools/api-e2e-e2e-kops-gce-stable-k8s-local' is not ready, resourceNotReady
I0623 13:06:11.103958    6289 executor.go:111] Tasks: 61 done / 68 total; 5 can run
I0623 13:06:17.865200    6289 executor.go:111] Tasks: 66 done / 68 total; 2 can run
I0623 13:06:25.630923    6289 executor.go:111] Tasks: 68 done / 68 total; 0 can run
I0623 13:06:25.684024    6289 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-e2e-kops-gce-stable.k8s.local

... skipping 8 lines ...

I0623 13:06:35.958736    5928 up.go:243] /logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kops validate cluster --name e2e-e2e-kops-gce-stable.k8s.local --count 10 --wait 15m0s
I0623 13:06:35.981349    6308 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 13:06:35.981456    6308 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-gce-stable.k8s.local

W0623 13:07:06.326127    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: i/o timeout
W0623 13:07:31.700505    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:07:41.703746    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:07:51.707120    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:08:01.710852    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:08:11.714068    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:08:21.722257    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:08:31.730425    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:08:41.734639    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:08:51.742181    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:09:01.746599    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
W0623 13:09:21.755458    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": net/http: TLS handshake timeout
W0623 13:09:31.762684    6308 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.202.140.103/api/v1/nodes": dial tcp 35.202.140.103:443: connect: connection refused
I0623 13:09:42.172158    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1

... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-g3vq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-g3vq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-gl7l	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-gl7l" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster
Pod	kube-system/kube-controller-manager-master-us-central1-a-llg0										system-cluster-critical pod "kube-controller-manager-master-us-central1-a-llg0" is pending

Validation Failed
W0623 13:09:42.819979    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:09:53.111039    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 7 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-g3vq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-g3vq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-gl7l	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-gl7l" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster
Pod	kube-system/kube-controller-manager-master-us-central1-a-llg0										system-cluster-critical pod "kube-controller-manager-master-us-central1-a-llg0" is not ready (kube-controller-manager)

Validation Failed
W0623 13:09:53.737784    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:10:04.025932    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/master-us-central1-a-llg0	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/master-us-central1-a-llg0" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-g3vq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-g3vq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-gl7l	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-gl7l" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster

Validation Failed
W0623 13:10:04.639282    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:10:14.923360    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 11 lines ...
Pod	kube-system/cloud-controller-manager-mwdgd												system-cluster-critical pod "cloud-controller-manager-mwdgd" is pending
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn												system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8													system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending
Pod	kube-system/dns-controller-78bc9bdd66-rxxpt												system-cluster-critical pod "dns-controller-78bc9bdd66-rxxpt" is pending
Pod	kube-system/kops-controller-b6qx6													system-cluster-critical pod "kops-controller-b6qx6" is pending

Validation Failed
W0623 13:10:15.443869    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:10:25.692433    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 9 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster
Pod	kube-system/cloud-controller-manager-mwdgd												system-cluster-critical pod "cloud-controller-manager-mwdgd" is pending
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn												system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8													system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending

Validation Failed
W0623 13:10:26.341142    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:10:36.685401    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 9 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn												system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8													system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending
Pod	kube-system/kube-proxy-master-us-central1-a-llg0											system-node-critical pod "kube-proxy-master-us-central1-a-llg0" is pending

Validation Failed
W0623 13:10:37.320566    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:10:47.788455    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 10 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster
Node	master-us-central1-a-llg0														master "master-us-central1-a-llg0" is missing kube-apiserver pod
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn												system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8													system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending
Pod	kube-system/metadata-proxy-v0.12-m6h96													system-node-critical pod "metadata-proxy-v0.12-m6h96" is pending

Validation Failed
W0623 13:10:48.426950    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:10:58.708129    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 9 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-hmlq" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m	machine "https://www.googleapis.com/compute/v1/projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/nodes-us-central1-a-pp7m" has not yet joined cluster
Node	master-us-central1-a-llg0														master "master-us-central1-a-llg0" is missing kube-apiserver pod
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn												system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8													system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending

Validation Failed
W0623 13:10:59.412259    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:11:09.767549    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 11 lines ...
Node	nodes-us-central1-a-g3vq														node "nodes-us-central1-a-g3vq" of role "node" is not ready
Node	nodes-us-central1-a-hmlq														node "nodes-us-central1-a-hmlq" of role "node" is not ready
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn												system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8													system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending
Pod	kube-system/metadata-proxy-v0.12-2xk8x													system-node-critical pod "metadata-proxy-v0.12-2xk8x" is pending

Validation Failed
W0623 13:11:10.405706    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:11:20.798353    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 16 lines ...
Pod	kube-system/coredns-dd657c749-ns2h8		system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending
Pod	kube-system/metadata-proxy-v0.12-2nnxv		system-node-critical pod "metadata-proxy-v0.12-2nnxv" is pending
Pod	kube-system/metadata-proxy-v0.12-2xk8x		system-node-critical pod "metadata-proxy-v0.12-2xk8x" is pending
Pod	kube-system/metadata-proxy-v0.12-gd8sn		system-node-critical pod "metadata-proxy-v0.12-gd8sn" is pending
Pod	kube-system/metadata-proxy-v0.12-wg9l8		system-node-critical pod "metadata-proxy-v0.12-wg9l8" is pending

Validation Failed
W0623 13:11:21.413278    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:11:31.720007    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 11 lines ...
Node	nodes-us-central1-a-gl7l			node "nodes-us-central1-a-gl7l" of role "node" is not ready
Node	nodes-us-central1-a-pp7m			node "nodes-us-central1-a-pp7m" of role "node" is not ready
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn	system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-gn5kn" is pending
Pod	kube-system/coredns-dd657c749-ns2h8		system-cluster-critical pod "coredns-dd657c749-ns2h8" is pending
Pod	kube-system/metadata-proxy-v0.12-2nnxv		system-node-critical pod "metadata-proxy-v0.12-2nnxv" is pending

Validation Failed
W0623 13:11:32.283188    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:11:42.621739    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 23 lines ...
nodes-us-central1-a-pp7m	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-central1-a-g3vq	system-node-critical pod "kube-proxy-nodes-us-central1-a-g3vq" is pending

Validation Failed
W0623 13:11:54.177613    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:12:04.546494    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 8 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-central1-a-gl7l	system-node-critical pod "kube-proxy-nodes-us-central1-a-gl7l" is pending
Pod	kube-system/kube-proxy-nodes-us-central1-a-pp7m	system-node-critical pod "kube-proxy-nodes-us-central1-a-pp7m" is pending

Validation Failed
W0623 13:12:05.141042    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:12:15.448459    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 7 lines ...
nodes-us-central1-a-pp7m	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-central1-a-hmlq	system-node-critical pod "kube-proxy-nodes-us-central1-a-hmlq" is pending

Validation Failed
W0623 13:12:16.035890    6308 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 13:12:26.317744    6308 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 183 lines ...
===================================
Random Seed: 1655990065 - Will randomize all specs
Will run 7042 specs

Running in parallel across 25 nodes

Jun 23 13:14:40.920: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Jun 23 13:14:40.920: INFO:  > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Jun 23 13:14:40.920: INFO:  > 
Jun 23 13:14:40.920: INFO: Cluster image sources lookup failed: exit status 1

Jun 23 13:14:40.920: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:14:40.922: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 23 13:14:40.938: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 23 13:14:40.968: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 23 13:14:40.968: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 552 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 209 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 8 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1439
------------------------------
... skipping 222 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:14:41.675: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 200 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:14:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2687" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":1,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:14:42.306: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 85 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:14:43.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8457" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:14:43.161: INFO: Only supported for providers [vsphere] (not gce)
... skipping 92 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating secret secrets-7752/secret-test-aefdce16-dcd1-4ef9-b3c9-7882b8a73b34
STEP: Creating a pod to test consume secrets
Jun 23 13:14:41.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b" in namespace "secrets-7752" to be "Succeeded or Failed"
Jun 23 13:14:41.318: INFO: Pod "pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.34046ms
Jun 23 13:14:43.330: INFO: Pod "pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046018441s
Jun 23 13:14:45.322: INFO: Pod "pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038499817s
Jun 23 13:14:47.322: INFO: Pod "pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037941535s
STEP: Saw pod success
Jun 23 13:14:47.322: INFO: Pod "pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b" satisfied condition "Succeeded or Failed"
Jun 23 13:14:47.325: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b container env-test: <nil>
STEP: delete the pod
Jun 23 13:14:47.754: INFO: Waiting for pod pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b to disappear
Jun 23 13:14:47.758: INFO: Pod pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b no longer exists
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.678 seconds]
[sig-node] Secrets
test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 23 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:14:48.898: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 13:14:41.703: INFO: Waiting up to 5m0s for pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3" in namespace "downward-api-1753" to be "Succeeded or Failed"
Jun 23 13:14:41.720: INFO: Pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.507875ms
Jun 23 13:14:43.726: INFO: Pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022964708s
Jun 23 13:14:45.724: INFO: Pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021052371s
Jun 23 13:14:47.724: INFO: Pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3": Phase="Running", Reason="", readiness=true. Elapsed: 6.021685668s
Jun 23 13:14:49.727: INFO: Pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.023881326s
STEP: Saw pod success
Jun 23 13:14:49.727: INFO: Pod "downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3" satisfied condition "Succeeded or Failed"
Jun 23 13:14:49.730: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3 container dapi-container: <nil>
STEP: delete the pod
Jun 23 13:14:49.899: INFO: Waiting for pod downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3 to disappear
Jun 23 13:14:49.902: INFO: Pod downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.714 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-3b436ec9-2f3e-4819-9a6a-f545f4eaa880
STEP: Creating a pod to test consume configMaps
Jun 23 13:14:41.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1" in namespace "projected-8824" to be "Succeeded or Failed"
Jun 23 13:14:41.950: INFO: Pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.026243ms
Jun 23 13:14:43.955: INFO: Pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028003464s
Jun 23 13:14:45.954: INFO: Pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027170744s
Jun 23 13:14:47.954: INFO: Pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027202496s
Jun 23 13:14:49.957: INFO: Pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030348041s
STEP: Saw pod success
Jun 23 13:14:49.957: INFO: Pod "pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1" satisfied condition "Succeeded or Failed"
Jun 23 13:14:49.961: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:14:50.335: INFO: Waiting for pod pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1 to disappear
Jun 23 13:14:50.345: INFO: Pod pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.576 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:14:50.372: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 90 lines ...
• [SLOW TEST:19.203 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:00.499: INFO: Only supported for providers [aws] (not gce)
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 13:14:48.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21" in namespace "projected-8854" to be "Succeeded or Failed"
Jun 23 13:14:48.987: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.926563ms
Jun 23 13:14:50.992: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011639343s
Jun 23 13:14:53.006: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025203039s
Jun 23 13:14:54.994: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013611883s
Jun 23 13:14:56.992: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011388162s
Jun 23 13:14:58.991: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010655428s
Jun 23 13:15:00.992: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Pending", Reason="", readiness=false. Elapsed: 12.011721526s
Jun 23 13:15:02.994: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.013123929s
STEP: Saw pod success
Jun 23 13:15:02.994: INFO: Pod "downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21" satisfied condition "Succeeded or Failed"
Jun 23 13:15:03.014: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21 container client-container: <nil>
STEP: delete the pod
Jun 23 13:15:03.062: INFO: Waiting for pod downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21 to disappear
Jun 23 13:15:03.066: INFO: Pod downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.134 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:03.086: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 231 lines ...
• [SLOW TEST:16.356 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/storage/host_path.go:39
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  test/e2e/common/storage/host_path.go:50
STEP: Creating a pod to test hostPath mode
Jun 23 13:14:41.523: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2893" to be "Succeeded or Failed"
Jun 23 13:14:41.568: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 45.253652ms
Jun 23 13:14:43.574: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050536982s
Jun 23 13:14:45.572: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049204587s
Jun 23 13:14:47.572: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049086172s
Jun 23 13:14:49.572: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049254095s
Jun 23 13:14:51.572: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049240936s
... skipping 3 lines ...
Jun 23 13:14:59.574: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.050863665s
Jun 23 13:15:01.582: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.059171174s
Jun 23 13:15:03.573: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.049880349s
Jun 23 13:15:05.576: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 24.052711575s
Jun 23 13:15:07.573: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.049578993s
STEP: Saw pod success
Jun 23 13:15:07.573: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 23 13:15:07.576: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jun 23 13:15:07.604: INFO: Waiting for pod pod-host-path-test to disappear
Jun 23 13:15:07.607: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:26.458 seconds]
[sig-storage] HostPath
test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  test/e2e/common/storage/host_path.go:50
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:07.643: INFO: Only supported for providers [openstack] (not gce)
... skipping 127 lines ...
• [SLOW TEST:19.936 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 49 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:221
Jun 23 13:14:41.312: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 13:14:41.395: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415" to be "Succeeded or Failed"
Jun 23 13:14:41.428: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 32.560813ms
Jun 23 13:14:43.433: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037522352s
Jun 23 13:14:45.432: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036573306s
Jun 23 13:14:47.432: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036419039s
Jun 23 13:14:49.432: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036598659s
STEP: Saw pod success
Jun 23 13:14:49.432: INFO: Pod "hostpath-symlink-prep-provisioning-7415" satisfied condition "Succeeded or Failed"
Jun 23 13:14:49.432: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415"
Jun 23 13:14:49.441: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" to be fully deleted
Jun 23 13:14:49.443: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-flrn
STEP: Creating a pod to test subpath
Jun 23 13:14:49.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-flrn" in namespace "provisioning-7415" to be "Succeeded or Failed"
Jun 23 13:14:49.453: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028586ms
Jun 23 13:14:51.457: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008411028s
Jun 23 13:14:53.458: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008823381s
Jun 23 13:14:55.464: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01515131s
Jun 23 13:14:57.457: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007908101s
Jun 23 13:14:59.461: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.012230894s
Jun 23 13:15:01.461: INFO: Pod "pod-subpath-test-inlinevolume-flrn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.011915667s
STEP: Saw pod success
Jun 23 13:15:01.461: INFO: Pod "pod-subpath-test-inlinevolume-flrn" satisfied condition "Succeeded or Failed"
Jun 23 13:15:01.465: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod pod-subpath-test-inlinevolume-flrn container test-container-subpath-inlinevolume-flrn: <nil>
STEP: delete the pod
Jun 23 13:15:01.724: INFO: Waiting for pod pod-subpath-test-inlinevolume-flrn to disappear
Jun 23 13:15:01.727: INFO: Pod pod-subpath-test-inlinevolume-flrn no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-flrn
Jun 23 13:15:01.727: INFO: Deleting pod "pod-subpath-test-inlinevolume-flrn" in namespace "provisioning-7415"
STEP: Deleting pod
Jun 23 13:15:01.730: INFO: Deleting pod "pod-subpath-test-inlinevolume-flrn" in namespace "provisioning-7415"
Jun 23 13:15:01.741: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415" to be "Succeeded or Failed"
Jun 23 13:15:01.745: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 3.524286ms
Jun 23 13:15:03.750: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00861926s
Jun 23 13:15:05.771: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029299161s
Jun 23 13:15:07.749: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007744766s
Jun 23 13:15:09.751: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009220405s
Jun 23 13:15:11.750: INFO: Pod "hostpath-symlink-prep-provisioning-7415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.008421836s
STEP: Saw pod success
Jun 23 13:15:11.750: INFO: Pod "hostpath-symlink-prep-provisioning-7415" satisfied condition "Succeeded or Failed"
Jun 23 13:15:11.750: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7415" in namespace "provisioning-7415"
Jun 23 13:15:11.767: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7415" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187
Jun 23 13:15:11.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7415" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:14:41.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:271
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
Jun 23 13:15:13.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2076" for this suite.


• [SLOW TEST:32.188 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:271
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:13.250: INFO: Only supported for providers [vsphere] (not gce)
... skipping 222 lines ...
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:367
    should create and stop a working application  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":1,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:13.999: INFO: Only supported for providers [aws] (not gce)
... skipping 207 lines ...
• [SLOW TEST:32.226 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  test/e2e/apps/deployment.go:138
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":2,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:14.581: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":44,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:10.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:15:14.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-825" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":4,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "apply-5600" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":5,"skipped":45,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:15.020: INFO: Only supported for providers [azure] (not gce)
... skipping 158 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:15:15.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2729" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":2,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:15.618: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 45 lines ...
• [SLOW TEST:8.625 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:16.430: INFO: Only supported for providers [vsphere] (not gce)
... skipping 60 lines ...
Jun 23 13:15:16.810: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 23 13:15:16.810: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-4205 describe pod agnhost-primary-xgxv4'
Jun 23 13:15:16.924: INFO: stderr: ""
Jun 23 13:15:16.924: INFO: stdout: "Name:         agnhost-primary-xgxv4\nNamespace:    kubectl-4205\nPriority:     0\nNode:         nodes-us-central1-a-g3vq/10.0.16.3\nStart Time:   Thu, 23 Jun 2022 13:15:06 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.1.42\nIPs:\n  IP:           100.96.1.42\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://141e85f39df108b9b9d75add51236518a3e3342f3316c6ca58b833894d5d0be8\n    Image:          registry.k8s.io/e2e-test-images/agnhost:2.39\n    Image ID:       registry.k8s.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 23 Jun 2022 13:15:08 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhrgd (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-lhrgd:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  10s   default-scheduler  Successfully assigned kubectl-4205/agnhost-primary-xgxv4 to nodes-us-central1-a-g3vq\n"
Jun 23 13:15:16.924: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-4205 describe rc agnhost-primary'
Jun 23 13:15:17.032: INFO: stderr: ""
Jun 23 13:15:17.032: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-4205\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        registry.k8s.io/e2e-test-images/agnhost:2.39\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: agnhost-primary-xgxv4\n"
Jun 23 13:15:17.032: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-4205 describe service agnhost-primary'
Jun 23 13:15:17.158: INFO: stderr: ""
Jun 23 13:15:17.158: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-4205\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.67.162.39\nIPs:               100.67.162.39\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.1.42:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun 23 13:15:17.163: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-4205 describe node master-us-central1-a-llg0'
Jun 23 13:15:17.306: INFO: stderr: ""
Jun 23 13:15:17.307: INFO: stdout: "Name:               master-us-central1-a-llg0\nRoles:              control-plane\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=e2-standard-2\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-central1\n                    failure-domain.beta.kubernetes.io/zone=us-central1-a\n                    kops.k8s.io/instancegroup=master-us-central1-a\n                    kops.k8s.io/kops-controller-pki=\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=master-us-central1-a-llg0\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\n                    node.kubernetes.io/instance-type=e2-standard-2\n                    topology.gke.io/zone=us-central1-a\n                    topology.kubernetes.io/region=us-central1\n                    topology.kubernetes.io/zone=us-central1-a\nAnnotations:        csi.volume.kubernetes.io/nodeid:\n                      {\"pd.csi.storage.gke.io\":\"projects/kube-gce-upg-1-4-1-5-upg-clu/zones/us-central1-a/instances/master-us-central1-a-llg0\"}\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 23 Jun 2022 13:09:38 +0000\nTaints:             node-role.kubernetes.io/control-plane:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  master-us-central1-a-llg0\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 23 Jun 2022 13:15:14 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 23 Jun 2022 13:10:45 +0000   Thu, 23 Jun 2022 13:10:45 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Thu, 23 Jun 2022 13:11:09 +0000   Thu, 23 Jun 2022 13:09:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 23 Jun 2022 13:11:09 +0000   Thu, 23 Jun 2022 13:09:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 23 Jun 2022 13:11:09 +0000   Thu, 23 Jun 2022 13:09:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 23 Jun 2022 13:11:09 +0000   Thu, 23 Jun 2022 13:10:08 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.0.16.6\n  ExternalIP:  34.70.209.167\n  Hostname:    master-us-central1-a-llg0\nCapacity:\n  cpu:                2\n  ephemeral-storage:  48600704Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             8145396Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  44790408733\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             8042996Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 821f3f63f855204d85dac61c2cb8ab6d\n  System UUID:                821f3f63-f855-204d-85da-c61c2cb8ab6d\n  Boot ID:                    25a97878-7e01-4d1c-a0a4-441d5bac8a7c\n  Kernel Version:             5.11.0-1028-gcp\n  OS Image:                   Ubuntu 20.04.3 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.6\n  Kubelet Version:            v1.25.0-alpha.1\n  Kube-Proxy Version:         v1.25.0-alpha.1\nPodCIDR:                      100.96.0.0/24\nPodCIDRs:                     100.96.0.0/24\nProviderID:                   gce://kube-gce-upg-1-4-1-5-upg-clu/us-central1-a/master-us-central1-a-llg0\nNon-terminated Pods:          (12 in total)\n  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---\n  gce-pd-csi-driver           csi-gce-pd-controller-9f559494d-ck9c2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s\n  gce-pd-csi-driver           csi-gce-pd-node-826hl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s\n  kube-system                 cloud-controller-manager-mwdgd                       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s\n  kube-system                 dns-controller-78bc9bdd66-rxxpt                      50m (2%)      0 (0%)      50Mi (0%)        0 (0%)         5m8s\n  kube-system                 etcd-manager-events-master-us-central1-a-llg0        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s\n  kube-system                 etcd-manager-main-master-us-central1-a-llg0          200m (10%)    0 (0%)      100Mi (1%)       0 (0%)         5m5s\n  kube-system                 kops-controller-b6qx6                                50m (2%)      0 (0%)      50Mi (0%)        0 (0%)         5m8s\n  kube-system                 kube-apiserver-master-us-central1-a-llg0             150m (7%)     0 (0%)      0 (0%)           0 (0%)         4m18s\n  kube-system                 kube-controller-manager-master-us-central1-a-llg0    100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s\n  kube-system                 kube-proxy-master-us-central1-a-llg0                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s\n  kube-system                 kube-scheduler-master-us-central1-a-llg0             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s\n  kube-system                 metadata-proxy-v0.12-m6h96                           32m (1%)      32m (1%)    45Mi (0%)        45Mi (0%)      4m38s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests     Limits\n  --------           --------     ------\n  cpu                1082m (54%)  32m (1%)\n  memory             345Mi (4%)   45Mi (0%)\n  ephemeral-storage  0 (0%)       0 (0%)\n  hugepages-1Gi      0 (0%)       0 (0%)\n  hugepages-2Mi      0 (0%)       0 (0%)\nEvents:\n  Type    Reason                   Age                     From                   Message\n  ----    ------                   ----                    ----                   -------\n  Normal  Starting                 5m30s                   kube-proxy             \n  Normal  NodeAllocatableEnforced  6m43s                   kubelet                Updated Node Allocatable limit across pods\n  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m44s)   kubelet                Node master-us-central1-a-llg0 status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    6m42s (x7 over 6m44s)   kubelet                Node master-us-central1-a-llg0 status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     6m42s (x7 over 6m44s)   kubelet                Node master-us-central1-a-llg0 status is now: NodeHasSufficientPID\n  Normal  RegisteredNode           5m9s                    node-controller        Node master-us-central1-a-llg0 event: Registered Node master-us-central1-a-llg0 in Controller\n  Normal  Synced                   4m39s                   cloud-node-controller  Node synced successfully\n  Normal  CIDRNotAvailable         3m56s (x10 over 4m38s)  cidrAllocator          Node master-us-central1-a-llg0 status is now: CIDRNotAvailable\n"
... skipping 11 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl describe
  test/e2e/kubectl/kubectl.go:1259
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 54 lines ...
• [SLOW TEST:37.251 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  test/e2e/network/dns.go:460
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:18.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:15:18.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2549" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":2,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:18.472: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-secret-4j4p
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 13:14:41.603: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4j4p" in namespace "subpath-1033" to be "Succeeded or Failed"
Jun 23 13:14:41.621: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Pending", Reason="", readiness=false. Elapsed: 18.497126ms
Jun 23 13:14:43.628: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025151425s
Jun 23 13:14:45.626: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022817872s
Jun 23 13:14:47.626: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023695414s
Jun 23 13:14:49.629: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026524778s
Jun 23 13:14:51.626: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023384446s
... skipping 9 lines ...
Jun 23 13:15:11.626: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Running", Reason="", readiness=true. Elapsed: 30.022759938s
Jun 23 13:15:13.627: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Running", Reason="", readiness=true. Elapsed: 32.023897853s
Jun 23 13:15:15.626: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Running", Reason="", readiness=true. Elapsed: 34.022966992s
Jun 23 13:15:17.626: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Running", Reason="", readiness=true. Elapsed: 36.023515952s
Jun 23 13:15:19.634: INFO: Pod "pod-subpath-test-secret-4j4p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.031479994s
STEP: Saw pod success
Jun 23 13:15:19.634: INFO: Pod "pod-subpath-test-secret-4j4p" satisfied condition "Succeeded or Failed"
Jun 23 13:15:19.640: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-secret-4j4p container test-container-subpath-secret-4j4p: <nil>
STEP: delete the pod
Jun 23 13:15:19.705: INFO: Waiting for pod pod-subpath-test-secret-4j4p to disappear
Jun 23 13:15:19.715: INFO: Pod pod-subpath-test-secret-4j4p no longer exists
STEP: Deleting pod pod-subpath-test-secret-4j4p
Jun 23 13:15:19.715: INFO: Deleting pod "pod-subpath-test-secret-4j4p" in namespace "subpath-1033"
... skipping 50 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should have a working scale subresource [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:21.354: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/framework/framework.go:187

... skipping 68 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-9dd4e0c7-5b5a-401c-8537-9f87f90237cc
STEP: Creating a pod to test consume configMaps
Jun 23 13:15:13.344: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428" in namespace "projected-896" to be "Succeeded or Failed"
Jun 23 13:15:13.348: INFO: Pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428": Phase="Pending", Reason="", readiness=false. Elapsed: 3.685317ms
Jun 23 13:15:15.356: INFO: Pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011197872s
Jun 23 13:15:17.352: INFO: Pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007782285s
Jun 23 13:15:19.354: INFO: Pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009498993s
Jun 23 13:15:21.352: INFO: Pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.007537286s
STEP: Saw pod success
Jun 23 13:15:21.352: INFO: Pod "pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428" satisfied condition "Succeeded or Failed"
Jun 23 13:15:21.358: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:15:21.387: INFO: Waiting for pod pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428 to disappear
Jun 23 13:15:21.391: INFO: Pod pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.108 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:21.440: INFO: Only supported for providers [azure] (not gce)
... skipping 81 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 13:15:07.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8" in namespace "projected-5693" to be "Succeeded or Failed"
Jun 23 13:15:07.795: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.362453ms
Jun 23 13:15:09.799: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010913515s
Jun 23 13:15:11.800: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012407449s
Jun 23 13:15:13.818: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030124263s
Jun 23 13:15:15.816: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02809632s
Jun 23 13:15:17.800: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.012275213s
Jun 23 13:15:19.803: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.015099879s
Jun 23 13:15:21.799: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.01147751s
STEP: Saw pod success
Jun 23 13:15:21.800: INFO: Pod "downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8" satisfied condition "Succeeded or Failed"
Jun 23 13:15:21.805: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8 container client-container: <nil>
STEP: delete the pod
Jun 23 13:15:21.822: INFO: Waiting for pod downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8 to disappear
Jun 23 13:15:21.826: INFO: Pod downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.089 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Containers
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:11.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override all
Jun 23 13:15:11.922: INFO: Waiting up to 5m0s for pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679" in namespace "containers-7831" to be "Succeeded or Failed"
Jun 23 13:15:11.935: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679": Phase="Pending", Reason="", readiness=false. Elapsed: 13.014486ms
Jun 23 13:15:13.940: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018163997s
Jun 23 13:15:15.941: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018510955s
Jun 23 13:15:17.945: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023226687s
Jun 23 13:15:19.940: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018182219s
Jun 23 13:15:21.943: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020882593s
STEP: Saw pod success
Jun 23 13:15:21.943: INFO: Pod "client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679" satisfied condition "Succeeded or Failed"
Jun 23 13:15:21.947: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:15:21.976: INFO: Waiting for pod client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679 to disappear
Jun 23 13:15:21.982: INFO: Pod client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679 no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.172 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:22.027: INFO: Only supported for providers [azure] (not gce)
... skipping 379 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:22.889: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
• [SLOW TEST:7.200 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:11.149 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  test/e2e/apimachinery/resource_quota.go:482
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":3,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:25.808: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 209 lines ...
Jun 23 13:14:41.385: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-2981 create -f -'
Jun 23 13:14:41.930: INFO: stderr: ""
Jun 23 13:14:41.930: INFO: stdout: "pod/httpd created\n"
Jun 23 13:14:41.930: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 13:14:41.931: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2981" to be "running and ready"
Jun 23 13:14:41.959: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.403871ms
Jun 23 13:14:41.959: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:43.964: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033436922s
Jun 23 13:14:43.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:45.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032905398s
Jun 23 13:14:45.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:47.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032560797s
Jun 23 13:14:47.963: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:49.964: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033394599s
Jun 23 13:14:49.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:51.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.032459327s
Jun 23 13:14:51.963: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:53.967: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.036325007s
Jun 23 13:14:53.967: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:55.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032248426s
Jun 23 13:14:55.963: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:57.964: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.033448181s
Jun 23 13:14:57.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:14:59.964: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.033406617s
Jun 23 13:14:59.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:01.964: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.03297283s
Jun 23 13:15:01.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:03.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032566508s
Jun 23 13:15:03.963: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:05.964: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.033642946s
Jun 23 13:15:05.964: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:07.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.032393113s
Jun 23 13:15:07.963: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:09.963: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.032003124s
Jun 23 13:15:09.963: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:11.971: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 30.04055774s
Jun 23 13:15:11.971: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 13:15:11.971: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should contain last line of the log
  test/e2e/kubectl/kubectl.go:651
STEP: executing a command with run
Jun 23 13:15:11.971: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-2981 run run-log-test --image=registry.k8s.io/e2e-test-images/busybox:1.29-2 --restart=OnFailure --pod-running-timeout=2m0s -- sh -c sleep 10; seq 100 | while read i; do echo $i; sleep 0.01; done; echo EOF'
Jun 23 13:15:12.063: INFO: stderr: ""
Jun 23 13:15:12.063: INFO: stdout: "pod/run-log-test created\n"
Jun 23 13:15:12.063: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [run-log-test]
Jun 23 13:15:12.063: INFO: Waiting up to 5m0s for pod "run-log-test" in namespace "kubectl-2981" to be "running and ready, or succeeded"
Jun 23 13:15:12.067: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920324ms
Jun 23 13:15:12.067: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:14.072: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008832734s
Jun 23 13:15:14.072: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:16.072: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009458431s
Jun 23 13:15:16.072: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:18.071: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008005021s
Jun 23 13:15:18.071: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:20.076: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013074859s
Jun 23 13:15:20.076: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-central1-a-g3vq' to be 'Running' but was 'Pending'
Jun 23 13:15:22.071: INFO: Pod "run-log-test": Phase="Running", Reason="", readiness=true. Elapsed: 10.008161166s
Jun 23 13:15:22.071: INFO: Pod "run-log-test" satisfied condition "running and ready, or succeeded"
Jun 23 13:15:22.071: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [run-log-test]
Jun 23 13:15:22.071: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-2981 logs -f run-log-test'
Jun 23 13:15:25.968: INFO: stderr: ""
Jun 23 13:15:25.968: INFO: stdout: "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\nEOF\n"
... skipping 20 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:407
    should contain last line of the log
    test/e2e/kubectl/kubectl.go:651
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:26.326: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run with an image specified user ID
  test/e2e/common/node/security_context.go:153
Jun 23 13:15:15.492: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-1138" to be "Succeeded or Failed"
Jun 23 13:15:15.498: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 5.717177ms
Jun 23 13:15:17.511: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018846633s
Jun 23 13:15:19.502: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010471569s
Jun 23 13:15:21.511: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018913795s
Jun 23 13:15:23.504: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012378964s
Jun 23 13:15:25.503: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010716587s
Jun 23 13:15:27.505: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.013573485s
Jun 23 13:15:27.506: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 13:15:27.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1138" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  test/e2e/common/node/security_context.go:106
    should run with an image specified user ID
    test/e2e/common/node/security_context.go:153
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:27.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run with an explicit non-root user ID [LinuxOnly]
  test/e2e/common/node/security_context.go:131
Jun 23 13:15:15.686: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3859" to be "Succeeded or Failed"
Jun 23 13:15:15.706: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 19.286256ms
Jun 23 13:15:17.710: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02413237s
Jun 23 13:15:19.720: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03330937s
Jun 23 13:15:21.713: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02670419s
Jun 23 13:15:23.737: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050975828s
Jun 23 13:15:25.711: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024751797s
Jun 23 13:15:27.728: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.041965361s
Jun 23 13:15:27.728: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 13:15:27.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3859" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  test/e2e/common/node/security_context.go:106
    should run with an explicit non-root user ID [LinuxOnly]
    test/e2e/common/node/security_context.go:131
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:27.788: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/framework/framework.go:187

... skipping 92 lines ...
      test/e2e/storage/testsuites/ephemeral.go:315

      Driver emptydir doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:27.907: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 163 lines ...
Jun 23 13:14:53.638: INFO: PersistentVolumeClaim pvc-x4w2k found but phase is Pending instead of Bound.
Jun 23 13:14:55.645: INFO: PersistentVolumeClaim pvc-x4w2k found and phase=Bound (6.024985176s)
Jun 23 13:14:55.645: INFO: Waiting up to 3m0s for PersistentVolume local-2j2bl to have phase Bound
Jun 23 13:14:55.648: INFO: PersistentVolume local-2j2bl found and phase=Bound (2.742858ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fvmp
STEP: Creating a pod to test subpath
Jun 23 13:14:55.664: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fvmp" in namespace "provisioning-1424" to be "Succeeded or Failed"
Jun 23 13:14:55.674: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.332027ms
Jun 23 13:14:57.682: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017878872s
Jun 23 13:14:59.683: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018522911s
Jun 23 13:15:01.679: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01495042s
Jun 23 13:15:03.682: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017410591s
Jun 23 13:15:05.685: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020846059s
... skipping 7 lines ...
Jun 23 13:15:21.680: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 26.015947417s
Jun 23 13:15:23.683: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 28.018855124s
Jun 23 13:15:25.680: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 30.015702192s
Jun 23 13:15:27.680: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Pending", Reason="", readiness=false. Elapsed: 32.015720949s
Jun 23 13:15:29.682: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.017779793s
STEP: Saw pod success
Jun 23 13:15:29.682: INFO: Pod "pod-subpath-test-preprovisionedpv-fvmp" satisfied condition "Succeeded or Failed"
Jun 23 13:15:29.689: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-preprovisionedpv-fvmp container test-container-subpath-preprovisionedpv-fvmp: <nil>
STEP: delete the pod
Jun 23 13:15:29.728: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fvmp to disappear
Jun 23 13:15:29.732: INFO: Pod pod-subpath-test-preprovisionedpv-fvmp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fvmp
Jun 23 13:15:29.732: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fvmp" in namespace "provisioning-1424"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:29.967: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 48 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl run pod
  test/e2e/kubectl/kubectl.go:1686
    should create a pod from an image when restart is Never  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:32.410: INFO: Only supported for providers [azure] (not gce)
... skipping 81 lines ...
Jun 23 13:15:08.019: INFO: PersistentVolumeClaim pvc-bzk5b found but phase is Pending instead of Bound.
Jun 23 13:15:10.024: INFO: PersistentVolumeClaim pvc-bzk5b found and phase=Bound (2.009400415s)
Jun 23 13:15:10.024: INFO: Waiting up to 3m0s for PersistentVolume local-92rtd to have phase Bound
Jun 23 13:15:10.027: INFO: PersistentVolume local-92rtd found and phase=Bound (3.43905ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6k97
STEP: Creating a pod to test subpath
Jun 23 13:15:10.042: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6k97" in namespace "provisioning-2114" to be "Succeeded or Failed"
Jun 23 13:15:10.051: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 9.174739ms
Jun 23 13:15:12.056: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014756013s
Jun 23 13:15:14.055: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012945019s
Jun 23 13:15:16.055: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013073194s
Jun 23 13:15:18.057: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015257611s
Jun 23 13:15:20.057: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015553333s
STEP: Saw pod success
Jun 23 13:15:20.057: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97" satisfied condition "Succeeded or Failed"
Jun 23 13:15:20.070: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-preprovisionedpv-6k97 container test-container-subpath-preprovisionedpv-6k97: <nil>
STEP: delete the pod
Jun 23 13:15:20.150: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6k97 to disappear
Jun 23 13:15:20.159: INFO: Pod pod-subpath-test-preprovisionedpv-6k97 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6k97
Jun 23 13:15:20.159: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6k97" in namespace "provisioning-2114"
STEP: Creating pod pod-subpath-test-preprovisionedpv-6k97
STEP: Creating a pod to test subpath
Jun 23 13:15:20.196: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6k97" in namespace "provisioning-2114" to be "Succeeded or Failed"
Jun 23 13:15:20.207: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.819449ms
Jun 23 13:15:22.212: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01546179s
Jun 23 13:15:24.213: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016873892s
Jun 23 13:15:26.212: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015525168s
Jun 23 13:15:28.214: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017555036s
Jun 23 13:15:30.214: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017076128s
Jun 23 13:15:32.213: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.016851936s
STEP: Saw pod success
Jun 23 13:15:32.214: INFO: Pod "pod-subpath-test-preprovisionedpv-6k97" satisfied condition "Succeeded or Failed"
Jun 23 13:15:32.220: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-preprovisionedpv-6k97 container test-container-subpath-preprovisionedpv-6k97: <nil>
STEP: delete the pod
Jun 23 13:15:32.252: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6k97 to disappear
Jun 23 13:15:32.257: INFO: Pod pod-subpath-test-preprovisionedpv-6k97 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6k97
Jun 23 13:15:32.257: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6k97" in namespace "provisioning-2114"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:32.576: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/framework/framework.go:187

... skipping 139 lines ...
&Pod{ObjectMeta:{test-deployment-577d99f66-c7xbv test-deployment-577d99f66- deployment-2684  83824347-4f83-406d-aae5-31b2a1874d2b 3979 0 2022-06-23 13:15:27 +0000 UTC <nil> <nil> map[pod-template-hash:577d99f66 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-577d99f66 7265743b-3893-4ad6-baec-a0e3fee1b0a2 0xc003614d87 0xc003614d88}] [] [{kube-controller-manager Update v1 2022-06-23 13:15:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7265743b-3893-4ad6-baec-a0e3fee1b0a2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 13:15:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.54\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-smvt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smvt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil
,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-central1-a-g3vq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.3,PodIP:100.96.1.54,StartTime:2022-06-23 13:15:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-23 13:15:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://22191739a4c48e141270d08ae0a8d8728a31fa652b5482024915a441a4e1f0f8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

Jun 23 13:15:35.008: INFO: ReplicaSet "test-deployment-5c5999c99b":
&ReplicaSet{ObjectMeta:{test-deployment-5c5999c99b  deployment-2684  99cc4bc1-b9eb-46ef-b2e1-5e17fcd6eccc 3744 3 2022-06-23 13:14:43 +0000 UTC <nil> <nil> map[pod-template-hash:5c5999c99b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 7c9f1674-1ed9-47ec-8ca1-cb61a86384ef 0xc0036147d7 0xc0036147d8}] [] [{kube-controller-manager Update apps/v1 2022-06-23 13:15:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c9f1674-1ed9-47ec-8ca1-cb61a86384ef\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-06-23 13:15:27 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5c5999c99b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:5c5999c99b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.39 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003614860 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}

Jun 23 13:15:35.014: INFO: pod: "test-deployment-5c5999c99b-2jb8b":
&Pod{ObjectMeta:{test-deployment-5c5999c99b-2jb8b test-deployment-5c5999c99b- deployment-2684  8e2cf414-893d-4ceb-b242-d59fbf811dfd 4064 0 2022-06-23 13:14:43 +0000 UTC 2022-06-23 13:15:28 +0000 UTC 0xc000405b10 map[pod-template-hash:5c5999c99b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5c5999c99b 99cc4bc1-b9eb-46ef-b2e1-5e17fcd6eccc 0xc000d3a007 0xc000d3a008}] [] [{kube-controller-manager Update v1 2022-06-23 13:14:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99cc4bc1-b9eb-46ef-b2e1-5e17fcd6eccc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 13:15:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.3.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7dgd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dgd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-central1-a-gl7l,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:14:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:14:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.5,PodIP:100.96.3.9,StartTime:2022-06-23 13:14:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2022-06-23 13:14:58 +0000 UTC,FinishedAt:2022-06-23 13:15:27 +0000 UTC,ContainerID:containerd://dc16748843e99c53249f372e212b31ee1e6fec1944c1bf4bdc68f02c73692d9f,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://dc16748843e99c53249f372e212b31ee1e6fec1944c1bf4bdc68f02c73692d9f,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.3.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

Jun 23 13:15:35.014: INFO: ReplicaSet "test-deployment-7df46cf5c9":
&ReplicaSet{ObjectMeta:{test-deployment-7df46cf5c9  deployment-2684  adb8a2b4-de45-4ce8-8f12-09d76cce2ed0 4063 4 2022-06-23 13:15:12 +0000 UTC <nil> <nil> map[pod-template-hash:7df46cf5c9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 7c9f1674-1ed9-47ec-8ca1-cb61a86384ef 0xc0036148c7 0xc0036148c8}] [] [{kube-controller-manager Update apps/v1 2022-06-23 13:15:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c9f1674-1ed9-47ec-8ca1-cb61a86384ef\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-06-23 13:15:34 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df46cf5c9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7df46cf5c9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.7 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003614950 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}

Jun 23 13:15:35.022: INFO: pod: "test-deployment-7df46cf5c9-4dqmn":
&Pod{ObjectMeta:{test-deployment-7df46cf5c9-4dqmn test-deployment-7df46cf5c9- deployment-2684  e74b136e-871d-499f-a1c2-959b2c582a9b 4061 0 2022-06-23 13:15:12 +0000 UTC 2022-06-23 13:15:35 +0000 UTC 0xc000cc8610 map[pod-template-hash:7df46cf5c9 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7df46cf5c9 adb8a2b4-de45-4ce8-8f12-09d76cce2ed0 0xc000cc8647 0xc000cc8648}] [] [{kube-controller-manager Update v1 2022-06-23 13:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adb8a2b4-de45-4ce8-8f12-09d76cce2ed0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 13:15:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.47\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q9qq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9qq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-central1-a-g3vq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 13:15:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.3,PodIP:100.96.1.47,StartTime:2022-06-23 13:15:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-23 13:15:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.7,ImageID:registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c,ContainerID:containerd://ab68db7f00be824cb1edf02e35f1534bb8242cae392953396660f0d062678058,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 7 lines ...
• [SLOW TEST:51.773 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
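Every spec that finishes emits a machine-readable summary record like the {"msg":"PASSED ..."} line above. A minimal Go sketch for tallying those records offline; the JSON field names come straight from the log, while the file path and the program itself are illustrative, not part of the e2e framework:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// summary mirrors the per-spec JSON records embedded in the build log.
type summary struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	f, err := os.Open("build-log.txt") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	passed := 0
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // object dumps make very long lines
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, `{"msg":`) {
			continue
		}
		var s summary
		if json.Unmarshal([]byte(line), &s) == nil && strings.HasPrefix(s.Msg, "PASSED") {
			passed++
		}
	}
	fmt.Println("passed specs:", passed)
}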
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:35.060: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 132 lines ...
• [SLOW TEST:35.198 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:35.810: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 229 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:36.176: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 180 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:15:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6382" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":3,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:36.624: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 50 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 13:15:32.503: INFO: Waiting up to 5m0s for pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443" in namespace "downward-api-9423" to be "Succeeded or Failed"
Jun 23 13:15:32.509: INFO: Pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250006ms
Jun 23 13:15:34.515: INFO: Pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012043042s
Jun 23 13:15:36.519: INFO: Pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443": Phase="Running", Reason="", readiness=true. Elapsed: 4.016189426s
Jun 23 13:15:38.516: INFO: Pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443": Phase="Running", Reason="", readiness=true. Elapsed: 6.013209956s
Jun 23 13:15:40.518: INFO: Pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015153457s
STEP: Saw pod success
Jun 23 13:15:40.518: INFO: Pod "downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443" satisfied condition "Succeeded or Failed"
Jun 23 13:15:40.522: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443 container dapi-container: <nil>
STEP: delete the pod
Jun 23 13:15:40.576: INFO: Waiting for pod downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443 to disappear
Jun 23 13:15:40.591: INFO: Pod downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.144 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:40.633: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 13:15:36.120: INFO: Waiting up to 5m0s for pod "security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd" in namespace "security-context-9419" to be "Succeeded or Failed"
Jun 23 13:15:36.128: INFO: Pod "security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.904747ms
Jun 23 13:15:38.143: INFO: Pod "security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022288042s
Jun 23 13:15:40.138: INFO: Pod "security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017956574s
Jun 23 13:15:42.139: INFO: Pod "security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019111061s
STEP: Saw pod success
Jun 23 13:15:42.139: INFO: Pod "security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd" satisfied condition "Succeeded or Failed"
Jun 23 13:15:42.149: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd container test-container: <nil>
STEP: delete the pod
Jun 23 13:15:42.188: INFO: Waiting for pod security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd to disappear
Jun 23 13:15:42.194: INFO: Pod security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.127 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:42.224: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 129 lines ...
• [SLOW TEST:12.089 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:44.741: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:187

... skipping 52 lines ...
• [SLOW TEST:16.811 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":7,"skipped":93,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:44.869: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 264 lines ...
Jun 23 13:15:36.832: INFO: PersistentVolumeClaim pvc-tnvgw found but phase is Pending instead of Bound.
Jun 23 13:15:38.911: INFO: PersistentVolumeClaim pvc-tnvgw found and phase=Bound (12.139325851s)
Jun 23 13:15:38.911: INFO: Waiting up to 3m0s for PersistentVolume local-hmqtf to have phase Bound
Jun 23 13:15:38.947: INFO: PersistentVolume local-hmqtf found and phase=Bound (36.002901ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-knlw
STEP: Creating a pod to test subpath
Jun 23 13:15:38.974: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-knlw" in namespace "provisioning-2033" to be "Succeeded or Failed"
Jun 23 13:15:38.987: INFO: Pod "pod-subpath-test-preprovisionedpv-knlw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.626198ms
Jun 23 13:15:40.992: INFO: Pod "pod-subpath-test-preprovisionedpv-knlw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018522384s
Jun 23 13:15:42.992: INFO: Pod "pod-subpath-test-preprovisionedpv-knlw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017551381s
Jun 23 13:15:44.995: INFO: Pod "pod-subpath-test-preprovisionedpv-knlw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021021078s
Jun 23 13:15:46.991: INFO: Pod "pod-subpath-test-preprovisionedpv-knlw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016965782s
STEP: Saw pod success
Jun 23 13:15:46.991: INFO: Pod "pod-subpath-test-preprovisionedpv-knlw" satisfied condition "Succeeded or Failed"
Jun 23 13:15:46.995: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-preprovisionedpv-knlw container test-container-subpath-preprovisionedpv-knlw: <nil>
STEP: delete the pod
Jun 23 13:15:47.016: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-knlw to disappear
Jun 23 13:15:47.020: INFO: Pod pod-subpath-test-preprovisionedpv-knlw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-knlw
Jun 23 13:15:47.020: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-knlw" in namespace "provisioning-2033"
... skipping 62 lines ...
• [SLOW TEST:12.138 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 7 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:15:47.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-2452" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":4,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 34 lines ...
Jun 23 13:15:38.708: INFO: PersistentVolumeClaim pvc-27tg6 found but phase is Pending instead of Bound.
Jun 23 13:15:40.713: INFO: PersistentVolumeClaim pvc-27tg6 found and phase=Bound (10.039465077s)
Jun 23 13:15:40.713: INFO: Waiting up to 3m0s for PersistentVolume local-fwvxs to have phase Bound
Jun 23 13:15:40.724: INFO: PersistentVolume local-fwvxs found and phase=Bound (10.490039ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rb6d
STEP: Creating a pod to test subpath
Jun 23 13:15:40.744: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rb6d" in namespace "provisioning-1838" to be "Succeeded or Failed"
Jun 23 13:15:40.760: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.881982ms
Jun 23 13:15:42.764: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019827397s
Jun 23 13:15:44.764: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019783432s
Jun 23 13:15:46.768: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02356992s
Jun 23 13:15:48.767: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022116934s
Jun 23 13:15:50.768: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02396699s
STEP: Saw pod success
Jun 23 13:15:50.768: INFO: Pod "pod-subpath-test-preprovisionedpv-rb6d" satisfied condition "Succeeded or Failed"
Jun 23 13:15:50.773: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-preprovisionedpv-rb6d container test-container-subpath-preprovisionedpv-rb6d: <nil>
STEP: delete the pod
Jun 23 13:15:50.797: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rb6d to disappear
Jun 23 13:15:50.805: INFO: Pod pod-subpath-test-preprovisionedpv-rb6d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rb6d
Jun 23 13:15:50.805: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rb6d" in namespace "provisioning-1838"
... skipping 30 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:51.185: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
• [SLOW TEST:6.130 seconds]
[sig-network] EndpointSliceMirroring
test/e2e/network/common/framework.go:23
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 13:15:44.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015" in namespace "downward-api-4752" to be "Succeeded or Failed"
Jun 23 13:15:44.838: INFO: Pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015": Phase="Pending", Reason="", readiness=false. Elapsed: 14.456681ms
Jun 23 13:15:46.845: INFO: Pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021081215s
Jun 23 13:15:48.847: INFO: Pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023300942s
Jun 23 13:15:50.849: INFO: Pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024717793s
Jun 23 13:15:52.852: INFO: Pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.028320262s
STEP: Saw pod success
Jun 23 13:15:52.852: INFO: Pod "downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015" satisfied condition "Succeeded or Failed"
Jun 23 13:15:52.860: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015 container client-container: <nil>
STEP: delete the pod
Jun 23 13:15:52.902: INFO: Waiting for pod downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015 to disappear
Jun 23 13:15:52.914: INFO: Pod downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.207 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:52.988: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:187

... skipping 21 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 13:15:45.022: INFO: Waiting up to 5m0s for pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39" in namespace "emptydir-7077" to be "Succeeded or Failed"
Jun 23 13:15:45.027: INFO: Pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.562265ms
Jun 23 13:15:47.030: INFO: Pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008082374s
Jun 23 13:15:49.031: INFO: Pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008992774s
Jun 23 13:15:51.031: INFO: Pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008720541s
Jun 23 13:15:53.054: INFO: Pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032134171s
STEP: Saw pod success
Jun 23 13:15:53.054: INFO: Pod "pod-2ee34857-88f0-4f18-9a12-52fec54b0b39" satisfied condition "Succeeded or Failed"
Jun 23 13:15:53.098: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-2ee34857-88f0-4f18-9a12-52fec54b0b39 container test-container: <nil>
STEP: delete the pod
Jun 23 13:15:53.164: INFO: Waiting for pod pod-2ee34857-88f0-4f18-9a12-52fec54b0b39 to disappear
Jun 23 13:15:53.177: INFO: Pod pod-2ee34857-88f0-4f18-9a12-52fec54b0b39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 105 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:256
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":58,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:54.714: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:47.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
• [SLOW TEST:10.065 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  no secret-based service account token should be auto-generated
  test/e2e/auth/service_accounts.go:56
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated","total":-1,"completed":4,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:57.314: INFO: Only supported for providers [azure] (not gce)
... skipping 14 lines ...
      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1577
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:42.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 38 lines ...
• [SLOW TEST:15.219 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:57.615: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 210 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:18.072 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:15:58.873: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 144 lines ...
• [SLOW TEST:78.586 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a GRPC liveness probe [NodeConformance]
  test/e2e/common/node/container_probe.go:543
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:24.069 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:00.827: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 191 lines ...
    test/e2e/network/networking.go:384

    Requires at least 2 nodes (not 0)

    test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":45,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:02.077: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 74 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 41 lines ...
• [SLOW TEST:27.625 seconds]
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":3,"skipped":47,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 51 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl expose
  test/e2e/kubectl/kubectl.go:1398
    should create services for rc  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:03.667: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-bd7ec205-87af-417f-9cc6-20e47ed608dd
STEP: Creating a pod to test consume secrets
Jun 23 13:15:57.780: INFO: Waiting up to 5m0s for pod "pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f" in namespace "secrets-3134" to be "Succeeded or Failed"
Jun 23 13:15:57.789: INFO: Pod "pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.989371ms
Jun 23 13:15:59.809: INFO: Pod "pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029391392s
Jun 23 13:16:01.793: INFO: Pod "pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01316758s
Jun 23 13:16:03.801: INFO: Pod "pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021862016s
STEP: Saw pod success
Jun 23 13:16:03.802: INFO: Pod "pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f" satisfied condition "Succeeded or Failed"
Jun 23 13:16:03.814: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 13:16:03.870: INFO: Waiting for pod pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f to disappear
Jun 23 13:16:03.880: INFO: Pod pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.194 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:03.940: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 71 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:2079
------------------------------
... skipping 34 lines ...
Jun 23 13:15:49.682: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:49.690: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:49.729: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:49.735: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:49.749: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:49.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:49.814: INFO: Lookups using dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local]

Jun 23 13:15:54.823: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.846: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.855: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.931: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.956: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.980: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:54.986: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:55.010: INFO: Lookups using dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local]

Jun 23 13:15:59.828: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.849: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.876: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.888: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.942: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.956: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85: the server could not find the requested resource (get pods dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85)
Jun 23 13:15:59.999: INFO: Lookups using dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local]

Jun 23 13:16:04.921: INFO: DNS probes using dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:37.599 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:05.185: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 66 lines ...
• [SLOW TEST:12.175 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":5,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 38 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 23 13:15:42.151: INFO: File wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:42.163: INFO: File jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:42.163: INFO: Lookups using dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 failed for: [wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local]

Jun 23 13:15:47.183: INFO: File wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:47.190: INFO: File jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:47.190: INFO: Lookups using dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 failed for: [wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local]

Jun 23 13:15:52.187: INFO: File wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:52.197: INFO: File jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:52.197: INFO: Lookups using dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 failed for: [wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local]

Jun 23 13:15:57.168: INFO: File wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:57.173: INFO: File jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local from pod  dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 13:15:57.173: INFO: Lookups using dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 failed for: [wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local jessie_udp@dns-test-service-3.dns-1317.svc.cluster.local]

Jun 23 13:16:02.185: INFO: DNS probes using dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1317.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1317.svc.cluster.local; sleep 1; done
... skipping 24 lines ...
• [SLOW TEST:42.501 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":4,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:10.466: INFO: Only supported for providers [aws] (not gce)
... skipping 146 lines ...
Jun 23 13:15:22.981: INFO: PersistentVolumeClaim csi-hostpathgbdfp found but phase is Pending instead of Bound.
Jun 23 13:15:24.986: INFO: PersistentVolumeClaim csi-hostpathgbdfp found but phase is Pending instead of Bound.
Jun 23 13:15:26.994: INFO: PersistentVolumeClaim csi-hostpathgbdfp found but phase is Pending instead of Bound.
Jun 23 13:15:29.000: INFO: PersistentVolumeClaim csi-hostpathgbdfp found and phase=Bound (46.214504465s)
STEP: Creating pod pod-subpath-test-dynamicpv-9dq9
STEP: Creating a pod to test subpath
Jun 23 13:15:29.012: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9dq9" in namespace "provisioning-2027" to be "Succeeded or Failed"
Jun 23 13:15:29.016: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046259ms
Jun 23 13:15:31.021: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008773425s
Jun 23 13:15:33.025: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013014147s
Jun 23 13:15:35.022: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009769254s
Jun 23 13:15:37.021: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008603597s
Jun 23 13:15:39.038: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025631981s
... skipping 2 lines ...
Jun 23 13:15:45.023: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.010983s
Jun 23 13:15:47.021: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.0087105s
Jun 23 13:15:49.021: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008650524s
Jun 23 13:15:51.021: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.009058977s
Jun 23 13:15:53.040: INFO: Pod "pod-subpath-test-dynamicpv-9dq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.0274271s
STEP: Saw pod success
Jun 23 13:15:53.040: INFO: Pod "pod-subpath-test-dynamicpv-9dq9" satisfied condition "Succeeded or Failed"
Jun 23 13:15:53.067: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-dynamicpv-9dq9 container test-container-subpath-dynamicpv-9dq9: <nil>
STEP: delete the pod
Jun 23 13:15:53.184: INFO: Waiting for pod pod-subpath-test-dynamicpv-9dq9 to disappear
Jun 23 13:15:53.195: INFO: Pod pod-subpath-test-dynamicpv-9dq9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9dq9
Jun 23 13:15:53.195: INFO: Deleting pod "pod-subpath-test-dynamicpv-9dq9" in namespace "provisioning-2027"
... skipping 61 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 127 lines ...
  test/e2e/storage/csi_mock_volume.go:1636
    should modify fsGroup if fsGroupPolicy=File
    test/e2e/storage/csi_mock_volume.go:1660
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:10.690: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 33 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:11.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8967" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":5,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:11.455: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 26 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:12.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-402" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":6,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:12.288: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 122 lines ...
Jun 23 13:15:22.745: INFO: PersistentVolumeClaim csi-hostpathx756p found but phase is Pending instead of Bound.
Jun 23 13:15:24.750: INFO: PersistentVolumeClaim csi-hostpathx756p found but phase is Pending instead of Bound.
Jun 23 13:15:26.757: INFO: PersistentVolumeClaim csi-hostpathx756p found but phase is Pending instead of Bound.
Jun 23 13:15:28.763: INFO: PersistentVolumeClaim csi-hostpathx756p found and phase=Bound (46.144002671s)
STEP: Creating pod pod-subpath-test-dynamicpv-z5xp
STEP: Creating a pod to test subpath
Jun 23 13:15:28.785: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-z5xp" in namespace "provisioning-5103" to be "Succeeded or Failed"
Jun 23 13:15:28.794: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 9.164974ms
Jun 23 13:15:30.799: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014303494s
Jun 23 13:15:32.809: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024769234s
Jun 23 13:15:34.799: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01459447s
Jun 23 13:15:36.806: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021827028s
Jun 23 13:15:38.804: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019400708s
... skipping 3 lines ...
Jun 23 13:15:46.800: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.015678419s
Jun 23 13:15:48.800: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.015401508s
Jun 23 13:15:50.805: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.020170688s
Jun 23 13:15:52.807: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Pending", Reason="", readiness=false. Elapsed: 24.022713737s
Jun 23 13:15:54.798: INFO: Pod "pod-subpath-test-dynamicpv-z5xp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.013227152s
STEP: Saw pod success
Jun 23 13:15:54.798: INFO: Pod "pod-subpath-test-dynamicpv-z5xp" satisfied condition "Succeeded or Failed"
Jun 23 13:15:54.801: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-dynamicpv-z5xp container test-container-volume-dynamicpv-z5xp: <nil>
STEP: delete the pod
Jun 23 13:15:54.819: INFO: Waiting for pod pod-subpath-test-dynamicpv-z5xp to disappear
Jun 23 13:15:54.824: INFO: Pod pod-subpath-test-dynamicpv-z5xp no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-z5xp
Jun 23 13:15:54.824: INFO: Deleting pod "pod-subpath-test-dynamicpv-z5xp" in namespace "provisioning-5103"
... skipping 61 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:12.369: INFO: Only supported for providers [openstack] (not gce)
... skipping 83 lines ...
• [SLOW TEST:14.895 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:13.168: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-df25156f-18be-442e-bd4c-6fd7c8a9bbfc
STEP: Creating a pod to test consume configMaps
Jun 23 13:16:07.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9" in namespace "projected-4633" to be "Succeeded or Failed"
Jun 23 13:16:07.079: INFO: Pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330383ms
Jun 23 13:16:09.085: INFO: Pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01411219s
Jun 23 13:16:11.085: INFO: Pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013598567s
Jun 23 13:16:13.091: INFO: Pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020261586s
Jun 23 13:16:15.083: INFO: Pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.012186449s
STEP: Saw pod success
Jun 23 13:16:15.083: INFO: Pod "pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9" satisfied condition "Succeeded or Failed"
Jun 23 13:16:15.090: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:16:15.124: INFO: Waiting for pod pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9 to disappear
Jun 23 13:16:15.136: INFO: Pod pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.124 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":80,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:15.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-6531" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":7,"skipped":83,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:15.369: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 101 lines ...
• [SLOW TEST:21.877 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  test/e2e/network/service.go:1657
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:19.233: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 87 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-210d5c97-c59a-41b1-a282-f835fc808f01
STEP: Creating a pod to test consume secrets
Jun 23 13:16:13.238: INFO: Waiting up to 5m0s for pod "pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93" in namespace "secrets-6820" to be "Succeeded or Failed"
Jun 23 13:16:13.244: INFO: Pod "pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124875ms
Jun 23 13:16:15.254: INFO: Pod "pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016010441s
Jun 23 13:16:17.249: INFO: Pod "pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010674876s
Jun 23 13:16:19.249: INFO: Pod "pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010355489s
STEP: Saw pod success
Jun 23 13:16:19.249: INFO: Pod "pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93" satisfied condition "Succeeded or Failed"
Jun 23 13:16:19.252: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 13:16:19.277: INFO: Waiting for pod pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93 to disappear
Jun 23 13:16:19.281: INFO: Pod pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:19.361: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-30fee712-2681-418e-b2a8-ca928026b355
STEP: Creating a pod to test consume configMaps
Jun 23 13:16:05.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d" in namespace "configmap-3172" to be "Succeeded or Failed"
Jun 23 13:16:05.351: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.338507ms
Jun 23 13:16:07.355: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013432658s
Jun 23 13:16:09.357: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015491242s
Jun 23 13:16:11.355: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013388092s
Jun 23 13:16:13.357: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015052233s
Jun 23 13:16:15.355: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01296976s
Jun 23 13:16:17.355: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.013113868s
Jun 23 13:16:19.355: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.013471503s
STEP: Saw pod success
Jun 23 13:16:19.355: INFO: Pod "pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d" satisfied condition "Succeeded or Failed"
Jun 23 13:16:19.359: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:16:19.393: INFO: Waiting for pod pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d to disappear
Jun 23 13:16:19.405: INFO: Pod pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 24 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:19.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-1394" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":6,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:16.103 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  test/e2e/apps/rc.go:70
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a private image","total":-1,"completed":4,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:19.662: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 32 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:19.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-7192" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:19.811: INFO: Only supported for providers [azure] (not gce)
... skipping 35 lines ...
      Driver local doesn't support ext3 -- skipping

      test/e2e/storage/framework/testsuite.go:121
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] version v1
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:16:19.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 96 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:232
Jun 23 13:15:30.019: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 13:15:30.061: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2354" in namespace "provisioning-2354" to be "Succeeded or Failed"
Jun 23 13:15:30.074: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 12.956103ms
Jun 23 13:15:32.078: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016732171s
Jun 23 13:15:34.080: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01831418s
Jun 23 13:15:36.082: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020958985s
Jun 23 13:15:38.092: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030047414s
STEP: Saw pod success
Jun 23 13:15:38.092: INFO: Pod "hostpath-symlink-prep-provisioning-2354" satisfied condition "Succeeded or Failed"
Jun 23 13:15:38.092: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2354" in namespace "provisioning-2354"
Jun 23 13:15:38.122: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2354" to be fully deleted
Jun 23 13:15:38.128: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qtx8
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 13:15:38.137: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qtx8" in namespace "provisioning-2354" to be "Succeeded or Failed"
Jun 23 13:15:38.149: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.064648ms
Jun 23 13:15:40.154: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016631387s
Jun 23 13:15:42.162: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024953105s
Jun 23 13:15:44.158: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020426793s
Jun 23 13:15:46.153: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015887102s
Jun 23 13:15:48.154: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016269276s
... skipping 7 lines ...
Jun 23 13:16:04.169: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Running", Reason="", readiness=false. Elapsed: 26.031491311s
Jun 23 13:16:06.154: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Running", Reason="", readiness=false. Elapsed: 28.016786305s
Jun 23 13:16:08.154: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Running", Reason="", readiness=false. Elapsed: 30.016299049s
Jun 23 13:16:10.154: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Running", Reason="", readiness=false. Elapsed: 32.016698285s
Jun 23 13:16:12.240: INFO: Pod "pod-subpath-test-inlinevolume-qtx8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.103007047s
STEP: Saw pod success
Jun 23 13:16:12.241: INFO: Pod "pod-subpath-test-inlinevolume-qtx8" satisfied condition "Succeeded or Failed"
Jun 23 13:16:12.272: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-inlinevolume-qtx8 container test-container-subpath-inlinevolume-qtx8: <nil>
STEP: delete the pod
Jun 23 13:16:12.380: INFO: Waiting for pod pod-subpath-test-inlinevolume-qtx8 to disappear
Jun 23 13:16:12.391: INFO: Pod pod-subpath-test-inlinevolume-qtx8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qtx8
Jun 23 13:16:12.391: INFO: Deleting pod "pod-subpath-test-inlinevolume-qtx8" in namespace "provisioning-2354"
STEP: Deleting pod
Jun 23 13:16:12.396: INFO: Deleting pod "pod-subpath-test-inlinevolume-qtx8" in namespace "provisioning-2354"
Jun 23 13:16:12.463: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2354" in namespace "provisioning-2354" to be "Succeeded or Failed"
Jun 23 13:16:12.474: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 11.054931ms
Jun 23 13:16:14.479: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016116016s
Jun 23 13:16:16.478: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014369926s
Jun 23 13:16:18.478: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015170005s
Jun 23 13:16:20.484: INFO: Pod "hostpath-symlink-prep-provisioning-2354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020737998s
STEP: Saw pod success
Jun 23 13:16:20.484: INFO: Pod "hostpath-symlink-prep-provisioning-2354" satisfied condition "Succeeded or Failed"
Jun 23 13:16:20.484: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2354" in namespace "provisioning-2354"
Jun 23 13:16:20.507: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2354" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187
Jun 23 13:16:20.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2354" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-47332dd7-2511-46d6-bc89-2b01dc3c38ef
STEP: Creating a pod to test consume secrets
Jun 23 13:16:12.506: INFO: Waiting up to 5m0s for pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc" in namespace "secrets-7483" to be "Succeeded or Failed"
Jun 23 13:16:12.520: INFO: Pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.27719ms
Jun 23 13:16:14.525: INFO: Pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018789484s
Jun 23 13:16:16.525: INFO: Pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018806468s
Jun 23 13:16:18.524: INFO: Pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018322865s
Jun 23 13:16:20.537: INFO: Pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031085564s
STEP: Saw pod success
Jun 23 13:16:20.537: INFO: Pod "pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc" satisfied condition "Succeeded or Failed"
Jun 23 13:16:20.550: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 13:16:20.595: INFO: Waiting for pod pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc to disappear
Jun 23 13:16:20.603: INFO: Pod pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.212 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:20.654: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:20.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4775" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:20.791: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 90 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-downwardapi-hwqf
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 13:15:53.190: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hwqf" in namespace "subpath-8952" to be "Succeeded or Failed"
Jun 23 13:15:53.199: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.619859ms
Jun 23 13:15:55.205: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01491715s
Jun 23 13:15:57.205: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 4.014889729s
Jun 23 13:15:59.203: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 6.013596359s
Jun 23 13:16:01.214: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 8.024465401s
Jun 23 13:16:03.204: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 10.013877561s
... skipping 4 lines ...
Jun 23 13:16:13.205: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 20.015398852s
Jun 23 13:16:15.209: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 22.019033597s
Jun 23 13:16:17.206: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 24.015717444s
Jun 23 13:16:19.208: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Running", Reason="", readiness=true. Elapsed: 26.017984834s
Jun 23 13:16:21.205: INFO: Pod "pod-subpath-test-downwardapi-hwqf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.015499343s
STEP: Saw pod success
Jun 23 13:16:21.205: INFO: Pod "pod-subpath-test-downwardapi-hwqf" satisfied condition "Succeeded or Failed"
Jun 23 13:16:21.209: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-downwardapi-hwqf container test-container-subpath-downwardapi-hwqf: <nil>
STEP: delete the pod
Jun 23 13:16:21.236: INFO: Waiting for pod pod-subpath-test-downwardapi-hwqf to disappear
Jun 23 13:16:21.239: INFO: Pod pod-subpath-test-downwardapi-hwqf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-hwqf
Jun 23 13:16:21.239: INFO: Deleting pod "pod-subpath-test-downwardapi-hwqf" in namespace "subpath-8952"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with downward pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:21.266: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/framework/framework.go:187

... skipping 149 lines ...
• [SLOW TEST:20.114 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 23 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message if TerminationMessagePath is set [NodeConformance]
      test/e2e/common/node/runtime.go:173
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":8,"skipped":93,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:22.549: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
      test/e2e/storage/testsuites/volume_expand.go:176

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:19.796: INFO: >>> kubeConfig: /root/.kube/config
... skipping 59 lines ...
Jun 23 13:15:24.340: INFO: PersistentVolumeClaim csi-hostpath2jsz7 found but phase is Pending instead of Bound.
Jun 23 13:15:26.346: INFO: PersistentVolumeClaim csi-hostpath2jsz7 found but phase is Pending instead of Bound.
Jun 23 13:15:28.362: INFO: PersistentVolumeClaim csi-hostpath2jsz7 found but phase is Pending instead of Bound.
Jun 23 13:15:30.367: INFO: PersistentVolumeClaim csi-hostpath2jsz7 found and phase=Bound (10.055368691s)
STEP: Expanding non-expandable pvc
Jun 23 13:15:30.375: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jun 23 13:15:30.382: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:32.399: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:34.392: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:36.395: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:38.392: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:40.394: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:42.410: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:44.403: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:46.392: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:48.392: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:50.394: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:52.397: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:54.398: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:56.393: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:15:58.398: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:16:00.397: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 23 13:16:00.405: INFO: Error updating pvc csi-hostpath2jsz7: persistentvolumeclaims "csi-hostpath2jsz7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jun 23 13:16:00.405: INFO: Deleting PersistentVolumeClaim "csi-hostpath2jsz7"
Jun 23 13:16:00.410: INFO: Waiting up to 5m0s for PersistentVolume pvc-dcdb7dad-3efc-4a4a-b28b-f152404acf75 to get deleted
Jun 23 13:16:00.414: INFO: PersistentVolume pvc-dcdb7dad-3efc-4a4a-b28b-f152404acf75 found and phase=Bound (4.46484ms)
Jun 23 13:16:05.419: INFO: PersistentVolume pvc-dcdb7dad-3efc-4a4a-b28b-f152404acf75 was removed
STEP: Deleting sc
... skipping 53 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      test/e2e/storage/testsuites/volume_expand.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:6.286 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  pod should support memory backed volumes of specified size
  test/e2e/common/storage/empty_dir.go:298
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":6,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:25.667: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 45 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 13:16:22.315: INFO: Waiting up to 5m0s for pod "pod-600a4e7b-647b-4914-a5f4-f7083167521e" in namespace "emptydir-2805" to be "Succeeded or Failed"
Jun 23 13:16:22.326: INFO: Pod "pod-600a4e7b-647b-4914-a5f4-f7083167521e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.293028ms
Jun 23 13:16:24.331: INFO: Pod "pod-600a4e7b-647b-4914-a5f4-f7083167521e": Phase="Running", Reason="", readiness=false. Elapsed: 2.016311077s
Jun 23 13:16:26.331: INFO: Pod "pod-600a4e7b-647b-4914-a5f4-f7083167521e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016074063s
STEP: Saw pod success
Jun 23 13:16:26.331: INFO: Pod "pod-600a4e7b-647b-4914-a5f4-f7083167521e" satisfied condition "Succeeded or Failed"
Jun 23 13:16:26.337: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-600a4e7b-647b-4914-a5f4-f7083167521e container test-container: <nil>
STEP: delete the pod
Jun 23 13:16:26.370: INFO: Waiting for pod pod-600a4e7b-647b-4914-a5f4-f7083167521e to disappear
Jun 23 13:16:26.381: INFO: Pod pod-600a4e7b-647b-4914-a5f4-f7083167521e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 13:16:26.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2805" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 71 lines ...
• [SLOW TEST:6.246 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  should validate Deployment Status endpoints [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:27.588: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 129 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":78,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:30.161 seconds]
[sig-node] PreStop
test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  test/e2e/node/pre_stop.go:172
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":2,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:29.940: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing directory
  test/e2e/storage/testsuites/subpath.go:207
Jun 23 13:16:10.746: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 13:16:10.769: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3742" in namespace "provisioning-3742" to be "Succeeded or Failed"
Jun 23 13:16:10.778: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 8.774548ms
Jun 23 13:16:12.785: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01521553s
Jun 23 13:16:14.784: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014145403s
Jun 23 13:16:16.785: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016003416s
Jun 23 13:16:18.786: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016143296s
STEP: Saw pod success
Jun 23 13:16:18.786: INFO: Pod "hostpath-symlink-prep-provisioning-3742" satisfied condition "Succeeded or Failed"
Jun 23 13:16:18.786: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3742" in namespace "provisioning-3742"
Jun 23 13:16:18.803: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3742" to be fully deleted
Jun 23 13:16:18.808: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kxjv
STEP: Creating a pod to test subpath
Jun 23 13:16:18.819: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kxjv" in namespace "provisioning-3742" to be "Succeeded or Failed"
Jun 23 13:16:18.824: INFO: Pod "pod-subpath-test-inlinevolume-kxjv": Phase="Pending", Reason="", readiness=false. Elapsed: 5.331306ms
Jun 23 13:16:20.876: INFO: Pod "pod-subpath-test-inlinevolume-kxjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056340461s
Jun 23 13:16:22.833: INFO: Pod "pod-subpath-test-inlinevolume-kxjv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014254334s
Jun 23 13:16:24.830: INFO: Pod "pod-subpath-test-inlinevolume-kxjv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010654548s
Jun 23 13:16:26.829: INFO: Pod "pod-subpath-test-inlinevolume-kxjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.009789822s
STEP: Saw pod success
Jun 23 13:16:26.829: INFO: Pod "pod-subpath-test-inlinevolume-kxjv" satisfied condition "Succeeded or Failed"
Jun 23 13:16:26.833: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-inlinevolume-kxjv container test-container-volume-inlinevolume-kxjv: <nil>
STEP: delete the pod
Jun 23 13:16:26.852: INFO: Waiting for pod pod-subpath-test-inlinevolume-kxjv to disappear
Jun 23 13:16:26.856: INFO: Pod pod-subpath-test-inlinevolume-kxjv no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kxjv
Jun 23 13:16:26.856: INFO: Deleting pod "pod-subpath-test-inlinevolume-kxjv" in namespace "provisioning-3742"
STEP: Deleting pod
Jun 23 13:16:26.859: INFO: Deleting pod "pod-subpath-test-inlinevolume-kxjv" in namespace "provisioning-3742"
Jun 23 13:16:26.871: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3742" in namespace "provisioning-3742" to be "Succeeded or Failed"
Jun 23 13:16:26.875: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322009ms
Jun 23 13:16:28.895: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023434609s
Jun 23 13:16:30.880: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009193577s
Jun 23 13:16:32.880: INFO: Pod "hostpath-symlink-prep-provisioning-3742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008860426s
STEP: Saw pod success
Jun 23 13:16:32.880: INFO: Pod "hostpath-symlink-prep-provisioning-3742" satisfied condition "Succeeded or Failed"
Jun 23 13:16:32.880: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3742" in namespace "provisioning-3742"
Jun 23 13:16:32.899: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3742" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187
Jun 23 13:16:32.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3742" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:32.927: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 132 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/storage/host_path.go:39
[It] should support r/w [NodeConformance]
  test/e2e/common/storage/host_path.go:67
STEP: Creating a pod to test hostPath r/w
Jun 23 13:16:27.686: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1481" to be "Succeeded or Failed"
Jun 23 13:16:27.695: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21298ms
Jun 23 13:16:29.709: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023485807s
Jun 23 13:16:31.700: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014165903s
Jun 23 13:16:33.705: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019528746s
STEP: Saw pod success
Jun 23 13:16:33.706: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 23 13:16:33.713: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 23 13:16:33.733: INFO: Waiting for pod pod-host-path-test to disappear
Jun 23 13:16:33.737: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.138 seconds]
[sig-storage] HostPath
test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  test/e2e/common/storage/host_path.go:67
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":6,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 91 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:34.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-8112" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":5,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:34.619: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 246 lines ...
• [SLOW TEST:14.667 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:38.581: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 87 lines ...
Jun 23 13:16:29.595: INFO: ExecWithOptions: Clientset creation
Jun 23 13:16:29.595: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/sctp-1870/pods/hostexec-nodes-us-central1-a-g3vq-w2vmk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 13:16:29.720: INFO: exec nodes-us-central1-a-g3vq: command:   lsmod | grep sctp
Jun 23 13:16:29.720: INFO: exec nodes-us-central1-a-g3vq: stdout:    ""
Jun 23 13:16:29.720: INFO: exec nodes-us-central1-a-g3vq: stderr:    ""
Jun 23 13:16:29.720: INFO: exec nodes-us-central1-a-g3vq: exit code: 0
Jun 23 13:16:29.720: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 23 13:16:29.720: INFO: the sctp module is not loaded on node: nodes-us-central1-a-g3vq
STEP: Deleting pod hostexec-nodes-us-central1-a-g3vq-w2vmk in namespace sctp-1870
STEP: creating a pod with hostport on the selected node
STEP: Launching the pod on node nodes-us-central1-a-g3vq
Jun 23 13:16:29.742: INFO: Waiting up to 5m0s for pod "hostport" in namespace "sctp-1870" to be "running and ready"
Jun 23 13:16:29.747: INFO: Pod "hostport": Phase="Pending", Reason="", readiness=false. Elapsed: 5.716611ms
... skipping 35 lines ...
• [SLOW TEST:20.516 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
  should create a Pod with SCTP HostPort
  test/e2e/network/service.go:4124
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort","total":-1,"completed":7,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:40.050: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  test/e2e/common/node/security_context.go:101
Jun 23 13:16:33.218: INFO: Waiting up to 5m0s for pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7" in namespace "security-context-test-3165" to be "Succeeded or Failed"
Jun 23 13:16:33.227: INFO: Pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.944916ms
Jun 23 13:16:35.234: INFO: Pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015949198s
Jun 23 13:16:37.234: INFO: Pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015737285s
Jun 23 13:16:39.234: INFO: Pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016230686s
Jun 23 13:16:41.233: INFO: Pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01452054s
Jun 23 13:16:41.233: INFO: Pod "busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 13:16:41.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3165" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  test/e2e/common/node/security_context.go:52
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/node/security_context.go:101
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 22 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:16:41.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-687" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":4,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:41.433: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 65 lines ...
  test/e2e/common/node/runtime.go:43
    when running a container with a new image
    test/e2e/common/node/runtime.go:259
      should be able to pull image [NodeConformance]
      test/e2e/common/node/runtime.go:375
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":6,"skipped":71,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":3,"skipped":43,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:16:34.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
• [SLOW TEST:8.345 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:43.029: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 90 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 13:16:40.173: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c" in namespace "projected-5666" to be "Succeeded or Failed"
Jun 23 13:16:40.186: INFO: Pod "downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.016627ms
Jun 23 13:16:42.191: INFO: Pod "downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017911067s
Jun 23 13:16:44.190: INFO: Pod "downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017058994s
STEP: Saw pod success
Jun 23 13:16:44.190: INFO: Pod "downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c" satisfied condition "Succeeded or Failed"
Jun 23 13:16:44.193: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c container client-container: <nil>
STEP: delete the pod
Jun 23 13:16:44.216: INFO: Waiting for pod downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c to disappear
Jun 23 13:16:44.220: INFO: Pod downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 13:16:44.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5666" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:44.251: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
Jun 23 13:16:19.935: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-8244 create -f -'
Jun 23 13:16:21.050: INFO: stderr: ""
Jun 23 13:16:21.050: INFO: stdout: "pod/httpd created\n"
Jun 23 13:16:21.050: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 13:16:21.050: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-8244" to be "running and ready"
Jun 23 13:16:21.059: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66072ms
Jun 23 13:16:21.059: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Jun 23 13:16:23.071: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020753883s
Jun 23 13:16:23.071: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:16:25.084: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033814348s
Jun 23 13:16:25.084: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:16:27.062: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.012244799s
Jun 23 13:16:27.062: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  }]
Jun 23 13:16:29.064: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.013787445s
Jun 23 13:16:29.064: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  }]
Jun 23 13:16:31.163: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.113104463s
Jun 23 13:16:31.163: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  }]
Jun 23 13:16:33.067: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.017089441s
Jun 23 13:16:33.067: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:21 +0000 UTC  }]
Jun 23 13:16:35.063: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.012642998s
Jun 23 13:16:35.063: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 13:16:35.063: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] running a failing command
  test/e2e/kubectl/kubectl.go:547
Jun 23 13:16:35.063: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-8244 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-2 --restart=Never --pod-running-timeout=2m0s failure-1 -- /bin/sh -c exit 42'
... skipping 23 lines ...
  test/e2e/kubectl/kubectl.go:407
    should return command exit codes
    test/e2e/kubectl/kubectl.go:527
      running a failing command
      test/e2e/kubectl/kubectl.go:547
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":6,"skipped":78,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [sig-network] Conntrack
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:16:20.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 61 lines ...
• [SLOW TEST:30.353 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  test/e2e/network/conntrack.go:208
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":6,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:50.392: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 282 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      test/e2e/storage/testsuites/provisioning.go:428
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":1,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 108 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:256
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:54.557: INFO: Only supported for providers [openstack] (not gce)
... skipping 39 lines ...
Jun 23 13:16:44.691: INFO: The phase of Pod server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08 is Pending, waiting for it to be Running (with Ready = true)
Jun 23 13:16:46.690: INFO: Pod "server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009892709s
Jun 23 13:16:46.690: INFO: The phase of Pod server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08 is Pending, waiting for it to be Running (with Ready = true)
Jun 23 13:16:48.691: INFO: Pod "server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08": Phase="Running", Reason="", readiness=true. Elapsed: 10.010886714s
Jun 23 13:16:48.691: INFO: The phase of Pod server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08 is Running (Ready = true)
Jun 23 13:16:48.691: INFO: Pod "server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08" satisfied condition "running and ready"
Jun 23 13:16:48.722: INFO: Waiting up to 5m0s for pod "client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf" in namespace "pods-5522" to be "Succeeded or Failed"
Jun 23 13:16:48.740: INFO: Pod "client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.993241ms
Jun 23 13:16:50.753: INFO: Pod "client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031688479s
Jun 23 13:16:52.745: INFO: Pod "client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023355912s
Jun 23 13:16:54.745: INFO: Pod "client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023248725s
STEP: Saw pod success
Jun 23 13:16:54.745: INFO: Pod "client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf" satisfied condition "Succeeded or Failed"
Jun 23 13:16:54.748: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf container env3cont: <nil>
STEP: delete the pod
Jun 23 13:16:54.763: INFO: Waiting for pod client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf to disappear
Jun 23 13:16:54.768: INFO: Pod client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf no longer exists
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:16.128 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:54.790: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 188 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:16:55.914: INFO: Driver local doesn't support ext3 -- skipping
... skipping 195 lines ...
test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  test/e2e/storage/csi_mock_volume.go:1636
    should not modify fsGroup if fsGroupPolicy=None
    test/e2e/storage/csi_mock_volume.go:1660
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 25 lines ...
Jun 23 13:16:38.339: INFO: PersistentVolumeClaim pvc-29qd5 found but phase is Pending instead of Bound.
Jun 23 13:16:40.345: INFO: PersistentVolumeClaim pvc-29qd5 found and phase=Bound (6.031955381s)
Jun 23 13:16:40.345: INFO: Waiting up to 3m0s for PersistentVolume local-t5gd5 to have phase Bound
Jun 23 13:16:40.352: INFO: PersistentVolume local-t5gd5 found and phase=Bound (7.000836ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hvv9
STEP: Creating a pod to test subpath
Jun 23 13:16:40.369: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hvv9" in namespace "provisioning-5935" to be "Succeeded or Failed"
Jun 23 13:16:40.377: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749446ms
Jun 23 13:16:42.384: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01525584s
Jun 23 13:16:44.387: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018521167s
Jun 23 13:16:46.382: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013725386s
Jun 23 13:16:48.385: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016467968s
Jun 23 13:16:50.382: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.012976023s
STEP: Saw pod success
Jun 23 13:16:50.382: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9" satisfied condition "Succeeded or Failed"
Jun 23 13:16:50.386: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-preprovisionedpv-hvv9 container test-container-subpath-preprovisionedpv-hvv9: <nil>
STEP: delete the pod
Jun 23 13:16:50.424: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hvv9 to disappear
Jun 23 13:16:50.430: INFO: Pod pod-subpath-test-preprovisionedpv-hvv9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hvv9
Jun 23 13:16:50.430: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hvv9" in namespace "provisioning-5935"
STEP: Creating pod pod-subpath-test-preprovisionedpv-hvv9
STEP: Creating a pod to test subpath
Jun 23 13:16:50.442: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hvv9" in namespace "provisioning-5935" to be "Succeeded or Failed"
Jun 23 13:16:50.449: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.556935ms
Jun 23 13:16:52.457: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015671222s
Jun 23 13:16:54.453: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011712338s
Jun 23 13:16:56.455: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013511781s
Jun 23 13:16:58.455: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013255346s
STEP: Saw pod success
Jun 23 13:16:58.455: INFO: Pod "pod-subpath-test-preprovisionedpv-hvv9" satisfied condition "Succeeded or Failed"
Jun 23 13:16:58.459: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-preprovisionedpv-hvv9 container test-container-subpath-preprovisionedpv-hvv9: <nil>
STEP: delete the pod
Jun 23 13:16:58.475: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hvv9 to disappear
Jun 23 13:16:58.478: INFO: Pod pod-subpath-test-preprovisionedpv-hvv9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hvv9
Jun 23 13:16:58.478: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hvv9" in namespace "provisioning-5935"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 53 lines ...
Jun 23 13:16:38.785: INFO: Pod "pvc-volume-tester-nv4ns": Phase="Running", Reason="", readiness=true. Elapsed: 6.019524572s
Jun 23 13:16:38.785: INFO: Pod "pvc-volume-tester-nv4ns" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 23 13:16:38.785: INFO: Deleting pod "pvc-volume-tester-nv4ns" in namespace "csi-mock-volumes-3112"
Jun 23 13:16:38.794: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nv4ns" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 13:16:42.833: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"b23a48eb-f2f6-11ec-94d5-2a66849bb908","target_path":"/var/lib/kubelet/pods/e3f341eb-1542-4264-adc5-9b284cb99ea6/volumes/kubernetes.io~csi/pvc-06427dfb-3976-47a6-bdd3-c1e30ce6d42f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-nv4ns
Jun 23 13:16:42.833: INFO: Deleting pod "pvc-volume-tester-nv4ns" in namespace "csi-mock-volumes-3112"
STEP: Deleting claim pvc-8xqkb
Jun 23 13:16:42.886: INFO: Waiting up to 2m0s for PersistentVolume pvc-06427dfb-3976-47a6-bdd3-c1e30ce6d42f to get deleted
Jun 23 13:16:42.905: INFO: PersistentVolume pvc-06427dfb-3976-47a6-bdd3-c1e30ce6d42f found and phase=Bound (19.001545ms)
Jun 23 13:16:44.908: INFO: PersistentVolume pvc-06427dfb-3976-47a6-bdd3-c1e30ce6d42f found and phase=Released (2.022427751s)
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  test/e2e/storage/csi_mock_volume.go:1574
    token should not be plumbed down when CSIDriver is not deployed
    test/e2e/storage/csi_mock_volume.go:1602
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 150 lines ...
• [SLOW TEST:19.081 seconds]
[sig-node] KubeletManagedEtcHosts
test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:02.246: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:187

... skipping 23 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:447
Jun 23 13:16:41.504: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 13:16:41.515: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9060" in namespace "provisioning-9060" to be "Succeeded or Failed"
Jun 23 13:16:41.521: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Pending", Reason="", readiness=false. Elapsed: 5.050582ms
Jun 23 13:16:43.525: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008950709s
Jun 23 13:16:45.529: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01361895s
Jun 23 13:16:47.526: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Running", Reason="", readiness=true. Elapsed: 6.010248878s
Jun 23 13:16:49.527: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.011019841s
STEP: Saw pod success
Jun 23 13:16:49.527: INFO: Pod "hostpath-symlink-prep-provisioning-9060" satisfied condition "Succeeded or Failed"
Jun 23 13:16:49.527: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9060" in namespace "provisioning-9060"
Jun 23 13:16:49.534: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9060" to be fully deleted
Jun 23 13:16:49.537: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xkqt
Jun 23 13:16:49.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xkqt" in namespace "provisioning-9060" to be "running"
Jun 23 13:16:49.559: INFO: Pod "pod-subpath-test-inlinevolume-xkqt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.545284ms
... skipping 5 lines ...
Jun 23 13:16:53.767: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-xkqt
Jun 23 13:16:53.767: INFO: Deleting pod "pod-subpath-test-inlinevolume-xkqt" in namespace "provisioning-9060"
Jun 23 13:16:53.783: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-xkqt" to be fully deleted
STEP: Deleting pod
Jun 23 13:16:59.796: INFO: Deleting pod "pod-subpath-test-inlinevolume-xkqt" in namespace "provisioning-9060"
Jun 23 13:16:59.805: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9060" in namespace "provisioning-9060" to be "Succeeded or Failed"
Jun 23 13:16:59.812: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Pending", Reason="", readiness=false. Elapsed: 6.927275ms
Jun 23 13:17:01.816: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011236758s
Jun 23 13:17:03.820: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015717312s
Jun 23 13:17:05.816: INFO: Pod "hostpath-symlink-prep-provisioning-9060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010829399s
STEP: Saw pod success
Jun 23 13:17:05.816: INFO: Pod "hostpath-symlink-prep-provisioning-9060" satisfied condition "Succeeded or Failed"
Jun 23 13:17:05.816: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9060" in namespace "provisioning-9060"
Jun 23 13:17:05.826: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9060" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187
Jun 23 13:17:05.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9060" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:05.862: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 108 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1577
------------------------------
... skipping 31 lines ...
Jun 23 13:16:52.108: INFO: PersistentVolumeClaim pvc-h2gr7 found but phase is Pending instead of Bound.
Jun 23 13:16:54.113: INFO: PersistentVolumeClaim pvc-h2gr7 found and phase=Bound (4.014731694s)
Jun 23 13:16:54.113: INFO: Waiting up to 3m0s for PersistentVolume local-v9x4h to have phase Bound
Jun 23 13:16:54.116: INFO: PersistentVolume local-v9x4h found and phase=Bound (3.498795ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wnrl
STEP: Creating a pod to test subpath
Jun 23 13:16:54.139: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wnrl" in namespace "provisioning-7446" to be "Succeeded or Failed"
Jun 23 13:16:54.142: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894685ms
Jun 23 13:16:56.146: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007017399s
Jun 23 13:16:58.148: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00952175s
Jun 23 13:17:00.152: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013462276s
Jun 23 13:17:02.155: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0162654s
Jun 23 13:17:04.147: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008066298s
Jun 23 13:17:06.156: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017414305s
STEP: Saw pod success
Jun 23 13:17:06.156: INFO: Pod "pod-subpath-test-preprovisionedpv-wnrl" satisfied condition "Succeeded or Failed"
Jun 23 13:17:06.160: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-preprovisionedpv-wnrl container test-container-volume-preprovisionedpv-wnrl: <nil>
STEP: delete the pod
Jun 23 13:17:06.185: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wnrl to disappear
Jun 23 13:17:06.189: INFO: Pod pod-subpath-test-preprovisionedpv-wnrl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wnrl
Jun 23 13:17:06.189: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wnrl" in namespace "provisioning-7446"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":72,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:06.381: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 35 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:17:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8341" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":8,"skipped":77,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:06.572: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 192 lines ...
Jun 23 13:16:52.877: INFO: PersistentVolumeClaim pvc-w6r8h found but phase is Pending instead of Bound.
Jun 23 13:16:54.881: INFO: PersistentVolumeClaim pvc-w6r8h found and phase=Bound (4.015357982s)
Jun 23 13:16:54.881: INFO: Waiting up to 3m0s for PersistentVolume local-5gpvs to have phase Bound
Jun 23 13:16:54.884: INFO: PersistentVolume local-5gpvs found and phase=Bound (2.636388ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7v8g
STEP: Creating a pod to test subpath
Jun 23 13:16:54.900: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7v8g" in namespace "provisioning-1459" to be "Succeeded or Failed"
Jun 23 13:16:54.906: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156548ms
Jun 23 13:16:56.914: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01348325s
Jun 23 13:16:58.911: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010440231s
Jun 23 13:17:00.922: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021712036s
Jun 23 13:17:02.912: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012217386s
Jun 23 13:17:04.919: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018512245s
Jun 23 13:17:06.910: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.010217344s
STEP: Saw pod success
Jun 23 13:17:06.911: INFO: Pod "pod-subpath-test-preprovisionedpv-7v8g" satisfied condition "Succeeded or Failed"
Jun 23 13:17:06.923: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-preprovisionedpv-7v8g container test-container-subpath-preprovisionedpv-7v8g: <nil>
STEP: delete the pod
Jun 23 13:17:06.942: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7v8g to disappear
Jun 23 13:17:06.945: INFO: Pod pod-subpath-test-preprovisionedpv-7v8g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7v8g
Jun 23 13:17:06.946: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7v8g" in namespace "provisioning-1459"
... skipping 34 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":79,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:07.483: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 104 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:17:07.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8796" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":8,"skipped":98,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:07.689: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 32 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:17:08.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-8008" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:08.110: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 70 lines ...
[It] should support existing directory
  test/e2e/storage/testsuites/subpath.go:207
Jun 23 13:17:02.294: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 13:17:02.299: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mlq8
STEP: Creating a pod to test subpath
Jun 23 13:17:02.311: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mlq8" in namespace "provisioning-9474" to be "Succeeded or Failed"
Jun 23 13:17:02.320: INFO: Pod "pod-subpath-test-inlinevolume-mlq8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.846613ms
Jun 23 13:17:04.326: INFO: Pod "pod-subpath-test-inlinevolume-mlq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014130764s
Jun 23 13:17:06.331: INFO: Pod "pod-subpath-test-inlinevolume-mlq8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019776064s
Jun 23 13:17:08.326: INFO: Pod "pod-subpath-test-inlinevolume-mlq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014404056s
STEP: Saw pod success
Jun 23 13:17:08.326: INFO: Pod "pod-subpath-test-inlinevolume-mlq8" satisfied condition "Succeeded or Failed"
Jun 23 13:17:08.330: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-inlinevolume-mlq8 container test-container-volume-inlinevolume-mlq8: <nil>
STEP: delete the pod
Jun 23 13:17:08.359: INFO: Waiting for pod pod-subpath-test-inlinevolume-mlq8 to disappear
Jun 23 13:17:08.364: INFO: Pod pod-subpath-test-inlinevolume-mlq8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mlq8
Jun 23 13:17:08.364: INFO: Deleting pod "pod-subpath-test-inlinevolume-mlq8" in namespace "provisioning-9474"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":66,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:08.412: INFO: Only supported for providers [vsphere] (not gce)
... skipping 180 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/storage/projected_secret.go:92
STEP: Creating projection with secret that has name projected-secret-test-33d32fac-598a-4c36-b4e1-baa3386aa678
STEP: Creating a pod to test consume secrets
Jun 23 13:16:58.755: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991" in namespace "projected-8222" to be "Succeeded or Failed"
Jun 23 13:16:58.758: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.757415ms
Jun 23 13:17:00.763: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00795542s
Jun 23 13:17:02.762: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00688635s
Jun 23 13:17:04.764: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009186715s
Jun 23 13:17:06.764: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008601941s
Jun 23 13:17:08.774: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019408462s
STEP: Saw pod success
Jun 23 13:17:08.775: INFO: Pod "pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991" satisfied condition "Succeeded or Failed"
Jun 23 13:17:08.789: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 13:17:08.819: INFO: Waiting for pod pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991 to disappear
Jun 23 13:17:08.828: INFO: Pod pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
... skipping 5 lines ...
• [SLOW TEST:10.153 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/storage/projected_secret.go:92
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":4,"skipped":21,"failed":0}

S
------------------------------
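
The projected-secret test above mounts a Secret through the projected volume source and checks that only the Secret in the pod's own namespace is picked up, even when another namespace holds one with the same name. A minimal sketch of such a pod spec using the Go API types (names and image are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:         "projected-secret-volume-test",
                    Image:        "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative image
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/etc/projected"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                // Resolved in the pod's own namespace only, even if
                                // another namespace has a secret with the same name.
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

------------------------------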
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 87 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":60,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 118 lines ...
test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  test/e2e/storage/csi_mock_volume.go:639
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:668
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":9,"skipped":102,"failed":0}

S
------------------------------
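
The CSI mock test above ("attach=off, nodeExpansion=on") grows a volume by raising the PVC's storage request; when node expansion is required, the filesystem resize can only complete while the volume is mounted, which is why the test restarts the pod. A sketch of the control-plane half under those assumptions, with a hypothetical helper name and an expandable StorageClass assumed:

    package pvcexpand

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // expandPVC bumps the requested size; the CSI driver then performs
    // controller and/or node expansion depending on its capabilities.
    func expandPVC(c kubernetes.Interface, ns, name, newSize string) error {
        pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse(newSize)
        _, err = c.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
        return err
    }

------------------------------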
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:10.821: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 183 lines ...
• [SLOW TEST:15.502 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  test/e2e/network/service.go:1016
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":8,"skipped":47,"failed":0}

SSS
------------------------------
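
"Hairpin" traffic in the Services test above means a pod reaching itself through its own Service VIP. One plausible shape for the Service involved, selecting the very pod that dials it; the labels, name, and port here are assumptions for illustration:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "hairpin-svc"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "hairpin-pod"}, // the client pod itself
                Ports: []corev1.ServicePort{{
                    Port:       8080,
                    TargetPort: intstr.FromInt(8080),
                }},
            },
        }
        // The pod labelled app=hairpin-pod then dials hairpin-svc:8080 and
        // should arrive back at its own endpoint through the service VIP.
        out, _ := json.MarshalIndent(svc, "", "  ")
        fmt.Println(string(out))
    }

------------------------------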
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
... skipping 112 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
    test/e2e/storage/framework/testsuite.go:50
      should verify that all csinodes have volume limits
      test/e2e/storage/testsuites/volumelimits.go:249
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 55 lines ...
• [SLOW TEST:70.189 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  test/e2e/common/node/container_probe.go:244
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":41,"failed":0}

SS
------------------------------
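
The probe test above expects a container to be marked not-ready when its exec readiness probe overruns the timeout (enforced by kubelets >= 1.20, per the MinimumKubeletVersion tag). A sketch of a probe guaranteed to exceed a 1-second timeout; note the embedded field is named ProbeHandler in recent k8s.io/api versions, while older releases call it Handler:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                // Sleeps far longer than TimeoutSeconds, so every probe attempt
                // times out and the container never becomes ready.
                Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 600"}},
            },
            TimeoutSeconds: 1,
            PeriodSeconds:  10,
        }
        out, _ := json.MarshalIndent(probe, "", "  ")
        fmt.Println(string(out))
    }

------------------------------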
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:13.913: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 207 lines ...
Jun 23 13:17:06.816: INFO: PersistentVolumeClaim pvc-zw5p6 found but phase is Pending instead of Bound.
Jun 23 13:17:08.822: INFO: PersistentVolumeClaim pvc-zw5p6 found and phase=Bound (6.023771753s)
Jun 23 13:17:08.822: INFO: Waiting up to 3m0s for PersistentVolume local-zv4wc to have phase Bound
Jun 23 13:17:08.827: INFO: PersistentVolume local-zv4wc found and phase=Bound (4.944836ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pclk
STEP: Creating a pod to test subpath
Jun 23 13:17:08.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pclk" in namespace "provisioning-8766" to be "Succeeded or Failed"
Jun 23 13:17:08.864: INFO: Pod "pod-subpath-test-preprovisionedpv-pclk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.352546ms
Jun 23 13:17:10.870: INFO: Pod "pod-subpath-test-preprovisionedpv-pclk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011700632s
Jun 23 13:17:12.871: INFO: Pod "pod-subpath-test-preprovisionedpv-pclk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012661478s
Jun 23 13:17:14.870: INFO: Pod "pod-subpath-test-preprovisionedpv-pclk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011773057s
STEP: Saw pod success
Jun 23 13:17:14.870: INFO: Pod "pod-subpath-test-preprovisionedpv-pclk" satisfied condition "Succeeded or Failed"
Jun 23 13:17:14.873: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod pod-subpath-test-preprovisionedpv-pclk container test-container-volume-preprovisionedpv-pclk: <nil>
STEP: delete the pod
Jun 23 13:17:14.891: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pclk to disappear
Jun 23 13:17:14.896: INFO: Pod pod-subpath-test-preprovisionedpv-pclk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pclk
Jun 23 13:17:14.896: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pclk" in namespace "provisioning-8766"
... skipping 36 lines ...
[It] should support non-existent path
  test/e2e/storage/testsuites/subpath.go:196
Jun 23 13:17:08.228: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 13:17:08.228: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-288m
STEP: Creating a pod to test subpath
Jun 23 13:17:08.235: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-288m" in namespace "provisioning-5173" to be "Succeeded or Failed"
Jun 23 13:17:08.241: INFO: Pod "pod-subpath-test-inlinevolume-288m": Phase="Pending", Reason="", readiness=false. Elapsed: 5.635575ms
Jun 23 13:17:10.246: INFO: Pod "pod-subpath-test-inlinevolume-288m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010922137s
Jun 23 13:17:12.246: INFO: Pod "pod-subpath-test-inlinevolume-288m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011081961s
Jun 23 13:17:14.263: INFO: Pod "pod-subpath-test-inlinevolume-288m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028224684s
Jun 23 13:17:16.246: INFO: Pod "pod-subpath-test-inlinevolume-288m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.011031111s
STEP: Saw pod success
Jun 23 13:17:16.246: INFO: Pod "pod-subpath-test-inlinevolume-288m" satisfied condition "Succeeded or Failed"
Jun 23 13:17:16.250: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-inlinevolume-288m container test-container-volume-inlinevolume-288m: <nil>
STEP: delete the pod
Jun 23 13:17:16.272: INFO: Waiting for pod pod-subpath-test-inlinevolume-288m to disappear
Jun 23 13:17:16.276: INFO: Pod pod-subpath-test-inlinevolume-288m no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-288m
Jun 23 13:17:16.276: INFO: Deleting pod "pod-subpath-test-inlinevolume-288m" in namespace "provisioning-5173"
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 13:17:11.008: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b" in namespace "security-context-test-2032" to be "Succeeded or Failed"
Jun 23 13:17:11.011: INFO: Pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08132ms
Jun 23 13:17:13.020: INFO: Pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012775434s
Jun 23 13:17:15.014: INFO: Pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006640024s
Jun 23 13:17:17.017: INFO: Pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009581633s
Jun 23 13:17:17.017: INFO: Pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b" satisfied condition "Succeeded or Failed"
Jun 23 13:17:17.024: INFO: Got logs for pod "busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 13:17:17.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2032" for this suite.

... skipping 3 lines ...
test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  test/e2e/common/node/security_context.go:234
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":126,"failed":0}

S
------------------------------
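
The captured container log above, "ip: RTNETLINK answers: Operation not permitted", is the expected outcome: with privileged=false the container may not reconfigure network interfaces. A sketch of the relevant container spec; the exact ip command is an assumption, and the image name is taken from elsewhere in this log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        privileged := false
        c := corev1.Container{
            Name:  "busybox-privileged-false",
            Image: "registry.k8s.io/e2e-test-images/busybox:1.29-2",
            // Attempting to add an interface fails with
            // "RTNETLINK answers: Operation not permitted" when unprivileged.
            Command:         []string{"/bin/sh", "-c", "ip link add dummy0 type dummy || true"},
            SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }

------------------------------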
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 13:17:07.785: INFO: Waiting up to 5m0s for pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676" in namespace "emptydir-463" to be "Succeeded or Failed"
Jun 23 13:17:07.794: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487559ms
Jun 23 13:17:09.799: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013549689s
Jun 23 13:17:11.810: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024377614s
Jun 23 13:17:13.799: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013888794s
Jun 23 13:17:15.803: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017583268s
Jun 23 13:17:17.799: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013730705s
STEP: Saw pod success
Jun 23 13:17:17.799: INFO: Pod "pod-44e6d537-ca28-4f7a-9799-a0db7330f676" satisfied condition "Succeeded or Failed"
Jun 23 13:17:17.802: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-44e6d537-ca28-4f7a-9799-a0db7330f676 container test-container: <nil>
STEP: delete the pod
Jun 23 13:17:17.827: INFO: Waiting for pod pod-44e6d537-ca28-4f7a-9799-a0db7330f676 to disappear
Jun 23 13:17:17.831: INFO: Pod pod-44e6d537-ca28-4f7a-9799-a0db7330f676 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":112,"failed":0}

SSSSSS
------------------------------
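
The EmptyDir test above asserts that a default-medium emptyDir mount carries the expected mode when the pod sets fsGroup. A sketch of the pod pieces involved; the GID and mount path are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        fsGroup := int64(1001) // illustrative GID
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-example"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "registry.k8s.io/e2e-test-images/busybox:1.29-2",
                    // Print the mount's mode so the test can assert on it.
                    Command:      []string{"/bin/sh", "-c", "ls -ld /mnt/volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "vol",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

------------------------------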
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 28 lines ...
Jun 23 13:17:08.420: INFO: PersistentVolumeClaim pvc-kl7gx found but phase is Pending instead of Bound.
Jun 23 13:17:10.424: INFO: PersistentVolumeClaim pvc-kl7gx found and phase=Bound (6.030204026s)
Jun 23 13:17:10.424: INFO: Waiting up to 3m0s for PersistentVolume local-hk67n to have phase Bound
Jun 23 13:17:10.427: INFO: PersistentVolume local-hk67n found and phase=Bound (2.917192ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-865f
STEP: Creating a pod to test subpath
Jun 23 13:17:10.438: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-865f" in namespace "provisioning-7349" to be "Succeeded or Failed"
Jun 23 13:17:10.442: INFO: Pod "pod-subpath-test-preprovisionedpv-865f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194258ms
Jun 23 13:17:12.447: INFO: Pod "pod-subpath-test-preprovisionedpv-865f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009197782s
Jun 23 13:17:14.446: INFO: Pod "pod-subpath-test-preprovisionedpv-865f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008504429s
Jun 23 13:17:16.449: INFO: Pod "pod-subpath-test-preprovisionedpv-865f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011087731s
Jun 23 13:17:18.447: INFO: Pod "pod-subpath-test-preprovisionedpv-865f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009735307s
Jun 23 13:17:20.455: INFO: Pod "pod-subpath-test-preprovisionedpv-865f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.017748691s
STEP: Saw pod success
Jun 23 13:17:20.456: INFO: Pod "pod-subpath-test-preprovisionedpv-865f" satisfied condition "Succeeded or Failed"
Jun 23 13:17:20.474: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-preprovisionedpv-865f container test-container-volume-preprovisionedpv-865f: <nil>
STEP: delete the pod
Jun 23 13:17:20.516: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-865f to disappear
Jun 23 13:17:20.524: INFO: Pod pod-subpath-test-preprovisionedpv-865f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-865f
Jun 23 13:17:20.524: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-865f" in namespace "provisioning-7349"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
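
Each pre-provisioned-PV test above first waits for the claim to leave Pending ("found but phase is Pending instead of Bound") and for claim and volume to report Bound. A minimal version of that loop, with function name and intervals illustrative:

    package pvcwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPVCBound mirrors the "PersistentVolumeClaim ... found but phase is
    // Pending instead of Bound" loop in the log above.
    func waitForPVCBound(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
            pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
    }

------------------------------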
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 26 lines ...
Jun 23 13:17:07.513: INFO: PersistentVolumeClaim pvc-6nxjv found but phase is Pending instead of Bound.
Jun 23 13:17:09.520: INFO: PersistentVolumeClaim pvc-6nxjv found and phase=Bound (2.038264342s)
Jun 23 13:17:09.520: INFO: Waiting up to 3m0s for PersistentVolume local-gnwwz to have phase Bound
Jun 23 13:17:09.532: INFO: PersistentVolume local-gnwwz found and phase=Bound (11.890834ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ltp8
STEP: Creating a pod to test exec-volume-test
Jun 23 13:17:09.551: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ltp8" in namespace "volume-6215" to be "Succeeded or Failed"
Jun 23 13:17:09.557: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.75171ms
Jun 23 13:17:11.565: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013658684s
Jun 23 13:17:13.563: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011407819s
Jun 23 13:17:15.561: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009838413s
Jun 23 13:17:17.563: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011145081s
Jun 23 13:17:19.561: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00990568s
Jun 23 13:17:21.563: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.011186594s
STEP: Saw pod success
Jun 23 13:17:21.563: INFO: Pod "exec-volume-test-preprovisionedpv-ltp8" satisfied condition "Succeeded or Failed"
Jun 23 13:17:21.572: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod exec-volume-test-preprovisionedpv-ltp8 container exec-container-preprovisionedpv-ltp8: <nil>
STEP: delete the pod
Jun 23 13:17:21.602: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ltp8 to disappear
Jun 23 13:17:21.606: INFO: Pod exec-volume-test-preprovisionedpv-ltp8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ltp8
Jun 23 13:17:21.606: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ltp8" in namespace "volume-6215"
... skipping 24 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":27,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:21.892: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":89,"failed":0}
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:17:16.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  test/e2e/node/security_context.go:178
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 13:17:16.348: INFO: Waiting up to 5m0s for pod "security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50" in namespace "security-context-1760" to be "Succeeded or Failed"
Jun 23 13:17:16.352: INFO: Pod "security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331264ms
Jun 23 13:17:18.357: INFO: Pod "security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008859825s
Jun 23 13:17:20.357: INFO: Pod "security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008914492s
Jun 23 13:17:22.356: INFO: Pod "security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.007636001s
STEP: Saw pod success
Jun 23 13:17:22.356: INFO: Pod "security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50" satisfied condition "Succeeded or Failed"
Jun 23 13:17:22.359: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50 container test-container: <nil>
STEP: delete the pod
Jun 23 13:17:22.388: INFO: Waiting for pod security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50 to disappear
Jun 23 13:17:22.392: INFO: Pod security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 48 lines ...
• [SLOW TEST:16.919 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":7,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:25.517: INFO: Only supported for providers [vsphere] (not gce)
... skipping 144 lines ...
Jun 23 13:17:13.721: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-4422 create -f -'
Jun 23 13:17:13.932: INFO: stderr: ""
Jun 23 13:17:13.932: INFO: stdout: "pod/httpd created\n"
Jun 23 13:17:13.932: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 13:17:13.932: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-4422" to be "running and ready"
Jun 23 13:17:13.938: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.487633ms
Jun 23 13:17:13.938: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:17:15.955: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022494035s
Jun 23 13:17:15.955: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:17:17.942: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009490583s
Jun 23 13:17:17.942: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:17:19.941: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.009186269s
Jun 23 13:17:19.941: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-gl7l' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  }]
Jun 23 13:17:21.948: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.015654873s
Jun 23 13:17:21.948: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-gl7l' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  }]
Jun 23 13:17:23.943: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.010713122s
Jun 23 13:17:23.943: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-gl7l' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  }]
Jun 23 13:17:25.946: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.013426893s
Jun 23 13:17:25.946: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-gl7l' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  }]
Jun 23 13:17:27.942: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.009638567s
Jun 23 13:17:27.942: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-gl7l' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  }]
Jun 23 13:17:29.950: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.017662137s
Jun 23 13:17:29.950: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-gl7l' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:13 +0000 UTC  }]
Jun 23 13:17:31.941: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 18.00941228s
Jun 23 13:17:31.942: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 13:17:31.942: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec using resource/name
  test/e2e/kubectl/kubectl.go:459
STEP: executing a command in the container
... skipping 23 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:407
    should support exec using resource/name
    test/e2e/kubectl/kubectl.go:459
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":6,"skipped":39,"failed":0}

S
------------------------------
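
The long condition dumps above come from the "running and ready" helper: the pod must be in phase Running with the PodReady condition True, and until the httpd container passes readiness, Ready stays False with reason ContainersNotReady. The predicate reduces to roughly this sketch:

    package podready

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // runningAndReady reports whether a pod is Running with condition Ready=True,
    // the predicate the helper logs as "running and ready".
    func runningAndReady(pod *corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

------------------------------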
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:32.413: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 272 lines ...
Jun 23 13:17:23.561: INFO: PersistentVolumeClaim pvc-ll2w8 found but phase is Pending instead of Bound.
Jun 23 13:17:25.568: INFO: PersistentVolumeClaim pvc-ll2w8 found and phase=Bound (8.065680307s)
Jun 23 13:17:25.568: INFO: Waiting up to 3m0s for PersistentVolume local-tfvgw to have phase Bound
Jun 23 13:17:25.574: INFO: PersistentVolume local-tfvgw found and phase=Bound (6.278666ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-z6jp
STEP: Creating a pod to test exec-volume-test
Jun 23 13:17:25.587: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-z6jp" in namespace "volume-4249" to be "Succeeded or Failed"
Jun 23 13:17:25.598: INFO: Pod "exec-volume-test-preprovisionedpv-z6jp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.316292ms
Jun 23 13:17:27.603: INFO: Pod "exec-volume-test-preprovisionedpv-z6jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015270978s
Jun 23 13:17:29.607: INFO: Pod "exec-volume-test-preprovisionedpv-z6jp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019756454s
Jun 23 13:17:31.603: INFO: Pod "exec-volume-test-preprovisionedpv-z6jp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015267748s
Jun 23 13:17:33.605: INFO: Pod "exec-volume-test-preprovisionedpv-z6jp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017912659s
STEP: Saw pod success
Jun 23 13:17:33.606: INFO: Pod "exec-volume-test-preprovisionedpv-z6jp" satisfied condition "Succeeded or Failed"
Jun 23 13:17:33.610: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod exec-volume-test-preprovisionedpv-z6jp container exec-container-preprovisionedpv-z6jp: <nil>
STEP: delete the pod
Jun 23 13:17:33.634: INFO: Waiting for pod exec-volume-test-preprovisionedpv-z6jp to disappear
Jun 23 13:17:33.637: INFO: Pod exec-volume-test-preprovisionedpv-z6jp no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-z6jp
Jun 23 13:17:33.637: INFO: Deleting pod "exec-volume-test-preprovisionedpv-z6jp" in namespace "volume-4249"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":117,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:33.862: INFO: Only supported for providers [openstack] (not gce)
... skipping 32 lines ...
Jun 23 13:17:17.079: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-9581 create -f -'
Jun 23 13:17:17.270: INFO: stderr: ""
Jun 23 13:17:17.270: INFO: stdout: "pod/httpd created\n"
Jun 23 13:17:17.270: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 13:17:17.270: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-9581" to be "running and ready"
Jun 23 13:17:17.275: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572128ms
Jun 23 13:17:17.275: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-hmlq' to be 'Running' but was 'Pending'
Jun 23 13:17:19.279: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008748833s
Jun 23 13:17:19.279: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-hmlq' to be 'Running' but was 'Pending'
Jun 23 13:17:21.280: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009924553s
Jun 23 13:17:21.280: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-hmlq' to be 'Running' but was 'Pending'
Jun 23 13:17:23.281: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.010802361s
Jun 23 13:17:23.281: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC  }]
Jun 23 13:17:25.279: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.008975792s
Jun 23 13:17:25.280: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC  }]
Jun 23 13:17:27.279: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.008729332s
Jun 23 13:17:27.279: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:17 +0000 UTC  }]
Jun 23 13:17:29.280: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.009033125s
Jun 23 13:17:29.280: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 13:17:29.280: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] running a successful command
  test/e2e/kubectl/kubectl.go:542
Jun 23 13:17:29.280: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-9581 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-2 --restart=Never --pod-running-timeout=2m0s success -- /bin/sh -c exit 0'
... skipping 24 lines ...
  test/e2e/kubectl/kubectl.go:407
    should return command exit codes
    test/e2e/kubectl/kubectl.go:527
      running a successful command
      test/e2e/kubectl/kubectl.go:542
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command","total":-1,"completed":11,"skipped":127,"failed":0}

SS
------------------------------
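
The "running a successful command" case above runs kubectl run ... -- /bin/sh -c exit 0 and asserts success. A sketch of driving kubectl the same way from Go with os/exec, flags copied from the log line; the namespace is taken from the log and the sketch simply reports whatever exit code kubectl returns:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl",
            "--namespace", "kubectl-9581",
            "run", "-i",
            "--image=registry.k8s.io/e2e-test-images/busybox:1.29-2",
            "--restart=Never", "--pod-running-timeout=2m0s",
            "success", "--", "/bin/sh", "-c", "exit 0")
        if err := cmd.Run(); err != nil {
            // A non-zero exit surfaces as *exec.ExitError here.
            if exitErr, ok := err.(*exec.ExitError); ok {
                fmt.Printf("kubectl exited with code %d\n", exitErr.ExitCode())
                return
            }
            fmt.Println("failed to run kubectl:", err)
            return
        }
        fmt.Println("command succeeded (exit code 0)")
    }

------------------------------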
[BeforeEach] [sig-node] Probing container
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  test/e2e/common/node/container_probe.go:59
[It] should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
  test/e2e/common/node/container_probe.go:623
Jun 23 13:16:10.758: INFO: Waiting up to 5m0s for all pods (need at least 1) in namespace 'container-probe-9605' to be running and ready
Jun 23 13:16:10.784: INFO: The status of Pod probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 13:16:10.784: INFO: 0 / 1 pods in namespace 'container-probe-9605' are running and ready (0 seconds elapsed)
Jun 23 13:16:10.784: INFO: expected 0 pod replicas in namespace 'container-probe-9605', 0 are Running and Ready.
Jun 23 13:16:10.784: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 13:16:10.784: INFO: probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d  nodes-us-central1-a-hmlq  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  }]
Jun 23 13:16:10.785: INFO: 
Jun 23 13:16:12.815: INFO: The status of Pod probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 13:16:12.815: INFO: 0 / 1 pods in namespace 'container-probe-9605' are running and ready (2 seconds elapsed)
Jun 23 13:16:12.815: INFO: expected 0 pod replicas in namespace 'container-probe-9605', 0 are Running and Ready.
Jun 23 13:16:12.815: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 13:16:12.815: INFO: probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d  nodes-us-central1-a-hmlq  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  }]
Jun 23 13:16:12.815: INFO: 
Jun 23 13:16:14.796: INFO: The status of Pod probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 13:16:14.796: INFO: 0 / 1 pods in namespace 'container-probe-9605' are running and ready (4 seconds elapsed)
Jun 23 13:16:14.796: INFO: expected 0 pod replicas in namespace 'container-probe-9605', 0 are Running and Ready.
Jun 23 13:16:14.796: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 13:16:14.796: INFO: probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d  nodes-us-central1-a-hmlq  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  }]
Jun 23 13:16:14.796: INFO: 
Jun 23 13:16:16.795: INFO: The status of Pod probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 13:16:16.795: INFO: 0 / 1 pods in namespace 'container-probe-9605' are running and ready (6 seconds elapsed)
Jun 23 13:16:16.795: INFO: expected 0 pod replicas in namespace 'container-probe-9605', 0 are Running and Ready.
Jun 23 13:16:16.795: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 13:16:16.795: INFO: probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d  nodes-us-central1-a-hmlq  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  }]
Jun 23 13:16:16.795: INFO: 
Jun 23 13:16:18.810: INFO: The status of Pod probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 13:16:18.810: INFO: 0 / 1 pods in namespace 'container-probe-9605' are running and ready (8 seconds elapsed)
Jun 23 13:16:18.810: INFO: expected 0 pod replicas in namespace 'container-probe-9605', 0 are Running and Ready.
Jun 23 13:16:18.810: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 13:16:18.810: INFO: probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d  nodes-us-central1-a-hmlq  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  }]
Jun 23 13:16:18.810: INFO: 
Jun 23 13:16:20.839: INFO: The status of Pod probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 13:16:20.839: INFO: 0 / 1 pods in namespace 'container-probe-9605' are running and ready (10 seconds elapsed)
Jun 23 13:16:20.839: INFO: expected 0 pod replicas in namespace 'container-probe-9605', 0 are Running and Ready.
Jun 23 13:16:20.839: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 13:16:20.839: INFO: probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d  nodes-us-central1-a-hmlq  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC ContainersNotReady containers with unready status: [probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:10 +0000 UTC  }]
Jun 23 13:16:20.839: INFO: 
Jun 23 13:16:22.800: INFO: 1 / 1 pods in namespace 'container-probe-9605' are running and ready (12 seconds elapsed)
... skipping 30 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:34.852: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 91 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-709/configmap-test-d5d4e19e-5ca7-4259-9418-e4aa19fb3630
STEP: Creating a pod to test consume configMaps
Jun 23 13:17:29.088: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5" in namespace "configmap-709" to be "Succeeded or Failed"
Jun 23 13:17:29.096: INFO: Pod "pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.668571ms
Jun 23 13:17:31.100: INFO: Pod "pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012393754s
Jun 23 13:17:33.103: INFO: Pod "pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015108042s
Jun 23 13:17:35.101: INFO: Pod "pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012604081s
STEP: Saw pod success
Jun 23 13:17:35.101: INFO: Pod "pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5" satisfied condition "Succeeded or Failed"
Jun 23 13:17:35.104: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5 container env-test: <nil>
STEP: delete the pod
Jun 23 13:17:35.118: INFO: Waiting for pod pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5 to disappear
Jun 23 13:17:35.122: INFO: Pod pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.096 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:35.144: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:187

... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1439
------------------------------
... skipping 140 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":118,"failed":0}

S
------------------------------
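
"One pod requesting one prebound PVC" above pairs the claim with a specific local PV up front rather than leaving the match to the binder. One way to express pre-binding is to name the volume in the claim spec; all names here are illustrative, and the Resources field uses the shape of the API generation this run targets:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        sc := "local-storage" // illustrative StorageClass name
        pvc := corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "pvc-prebound-example"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: &sc,
                // Naming the PV pre-binds the claim to that specific volume.
                VolumeName: "local-pv-example",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
                },
            },
        }
        out, _ := json.MarshalIndent(pvc, "", "  ")
        fmt.Println(string(out))
    }

------------------------------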
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:17:32.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  test/e2e/node/security_context.go:185
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 13:17:32.722: INFO: Waiting up to 5m0s for pod "security-context-821f4368-56ca-486d-821b-3e48fcba9ff4" in namespace "security-context-2073" to be "Succeeded or Failed"
Jun 23 13:17:32.726: INFO: Pod "security-context-821f4368-56ca-486d-821b-3e48fcba9ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.874048ms
Jun 23 13:17:34.733: INFO: Pod "security-context-821f4368-56ca-486d-821b-3e48fcba9ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010126283s
Jun 23 13:17:36.732: INFO: Pod "security-context-821f4368-56ca-486d-821b-3e48fcba9ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009423433s
Jun 23 13:17:38.736: INFO: Pod "security-context-821f4368-56ca-486d-821b-3e48fcba9ff4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013055973s
STEP: Saw pod success
Jun 23 13:17:38.736: INFO: Pod "security-context-821f4368-56ca-486d-821b-3e48fcba9ff4" satisfied condition "Succeeded or Failed"
Jun 23 13:17:38.740: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod security-context-821f4368-56ca-486d-821b-3e48fcba9ff4 container test-container: <nil>
STEP: delete the pod
Jun 23 13:17:38.759: INFO: Waiting for pod security-context-821f4368-56ca-486d-821b-3e48fcba9ff4 to disappear
Jun 23 13:17:38.765: INFO: Pod security-context-821f4368-56ca-486d-821b-3e48fcba9ff4 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.114 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  test/e2e/node/security_context.go:185
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":7,"skipped":78,"failed":0}

S
------------------------------
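
The step "Creating a pod to test seccomp.security.alpha.kubernetes.io/pod" above uses the legacy seccomp annotation; the structured equivalent is securityContext.seccompProfile. A sketch of the Unconfined profile under test here alongside the RuntimeDefault profile from the earlier seccomp case:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        unconfined := corev1.PodSecurityContext{
            SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
        }
        runtimeDefault := corev1.PodSecurityContext{
            SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
        }
        for _, sc := range []corev1.PodSecurityContext{unconfined, runtimeDefault} {
            out, _ := json.MarshalIndent(sc, "", "  ")
            fmt.Println(string(out))
        }
    }

------------------------------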
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:38.797: INFO: Only supported for providers [azure] (not gce)
... skipping 160 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":7,"skipped":59,"failed":0}

SSS
------------------------------
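
The csi-hostpath case above provisions a read/write inline ephemeral volume: the pod embeds a PVC template, and the resulting claim is created and deleted with the pod. A sketch of such a volume entry, with sizes illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        vol := corev1.Volume{
            Name: "inline-ephemeral",
            VolumeSource: corev1.VolumeSource{
                Ephemeral: &corev1.EphemeralVolumeSource{
                    VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
                        Spec: corev1.PersistentVolumeClaimSpec{
                            AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                            Resources: corev1.ResourceRequirements{
                                Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
                            },
                        },
                    },
                },
            },
        }
        // The CSI driver provisions the backing volume when the pod starts,
        // giving read/write inline ephemeral storage scoped to the pod.
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

------------------------------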
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:38.858: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 87 lines ...
• [SLOW TEST:26.460 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:40.537: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 161 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 120 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  test/e2e/common/node/security_context.go:219
Jun 23 13:17:34.912: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3" in namespace "security-context-test-3440" to be "Succeeded or Failed"
Jun 23 13:17:34.920: INFO: Pod "busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217765ms
Jun 23 13:17:36.925: INFO: Pod "busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013139579s
Jun 23 13:17:38.924: INFO: Pod "busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012271094s
Jun 23 13:17:40.928: INFO: Pod "busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3": Phase="Failed", Reason="", readiness=false. Elapsed: 6.015554555s
Jun 23 13:17:40.928: INFO: Pod "busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 13:17:40.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3440" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/node/security_context.go:173
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    test/e2e/common/node/security_context.go:219
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":12,"skipped":135,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:40.972: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":8,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:41.012: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 71 lines ...
test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  test/e2e/common/node/kubelet.go:190
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:43.063: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 314 lines ...
• [SLOW TEST:23.256 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  test/e2e/network/service.go:1207
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":8,"skipped":101,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:48.831: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 147 lines ...
STEP: Destroying namespace "apply-6908" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":9,"skipped":123,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:49.134: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 114 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 198 lines ...
• [SLOW TEST:43.208 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when initial unready endpoints get ready
  test/e2e/network/conntrack.go:295
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:52.092: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 164 lines ...
test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  test/e2e/storage/csi_mock_volume.go:1334
    CSIStorageCapacity disabled
    test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":9,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 83 lines ...
• [SLOW TEST:21.164 seconds]
[sig-api-machinery] Servers with support for API chunking
test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  test/e2e/apimachinery/chunking.go:79
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":10,"skipped":124,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:55.072: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 47 lines ...
Jun 23 13:17:38.863: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-6182 create -f -'
Jun 23 13:17:40.296: INFO: stderr: ""
Jun 23 13:17:40.296: INFO: stdout: "pod/httpd created\n"
Jun 23 13:17:40.297: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 13:17:40.297: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6182" to be "running and ready"
Jun 23 13:17:40.323: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.563876ms
Jun 23 13:17:40.323: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Jun 23 13:17:42.327: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030688147s
Jun 23 13:17:42.327: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-hmlq' to be 'Running' but was 'Pending'
Jun 23 13:17:44.327: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030861229s
Jun 23 13:17:44.328: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-hmlq' to be 'Running' but was 'Pending'
Jun 23 13:17:46.333: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.036764367s
Jun 23 13:17:46.333: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  }]
Jun 23 13:17:48.331: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.034358745s
Jun 23 13:17:48.331: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  }]
Jun 23 13:17:50.329: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.031956257s
Jun 23 13:17:50.329: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  }]
Jun 23 13:17:52.327: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.030667353s
Jun 23 13:17:52.327: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  }]
Jun 23 13:17:54.337: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.040747178s
Jun 23 13:17:54.337: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-hmlq' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:17:40 +0000 UTC  }]
Jun 23 13:17:56.329: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.03261137s
Jun 23 13:17:56.329: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 13:17:56.329: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] execing into a container with a failing command
  test/e2e/kubectl/kubectl.go:533
Jun 23 13:17:56.329: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-6182 exec httpd --pod-running-timeout=2m0s -- /bin/sh -c exit 42'
... skipping 23 lines ...
  test/e2e/kubectl/kubectl.go:407
    should return command exit codes
    test/e2e/kubectl/kubectl.go:527
      execing into a container with a failing command
      test/e2e/kubectl/kubectl.go:533
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":8,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:17:56.875: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 53 lines ...
Jun 23 13:16:50.463: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-3766 create -f -'
Jun 23 13:16:51.443: INFO: stderr: ""
Jun 23 13:16:51.443: INFO: stdout: "pod/httpd created\n"
Jun 23 13:16:51.443: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 13:16:51.443: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3766" to be "running and ready"
Jun 23 13:16:51.476: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.268598ms
Jun 23 13:16:51.476: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:16:53.480: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037573288s
Jun 23 13:16:53.481: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:16:55.487: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043894958s
Jun 23 13:16:55.487: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:16:57.482: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039523516s
Jun 23 13:16:57.482: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:16:59.481: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0380416s
Jun 23 13:16:59.481: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-pp7m' to be 'Running' but was 'Pending'
Jun 23 13:17:01.480: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.037642934s
Jun 23 13:17:01.481: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC  }]
Jun 23 13:17:03.481: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.038439319s
Jun 23 13:17:03.481: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC  }]
Jun 23 13:17:05.486: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.042835505s
Jun 23 13:17:05.486: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-pp7m' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 13:16:51 +0000 UTC  }]
Jun 23 13:17:07.488: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.044690067s
Jun 23 13:17:07.488: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 13:17:07.488: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support inline execution and attach
  test/e2e/kubectl/kubectl.go:591
STEP: executing a command with run and attach with stdin
... skipping 45 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:407
    should support inline execution and attach
    test/e2e/kubectl/kubectl.go:591
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":7,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:02.321: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 146 lines ...
test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  test/e2e/storage/csi_mock_volume.go:1334
    CSIStorageCapacity unused
    test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:04.296: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 35 lines ...
      test/e2e/storage/testsuites/subpath.go:196

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":8,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:17:22.412: INFO: >>> kubeConfig: /root/.kube/config
... skipping 86 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":89,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:05.284: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
Jun 23 13:16:58.026: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 13:17:00.027: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015477971s
Jun 23 13:17:00.027: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 13:17:02.024: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 10.012488972s
Jun 23 13:17:02.024: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 13:17:02.024: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 13:17:02.024: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=services-8190 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.242.133:80 && echo service-down-failed'
Jun 23 13:17:04.235: INFO: rc: 28
Jun 23 13:17:04.235: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.242.133:80 && echo service-down-failed" in pod services-8190/verify-service-down-host-exec-pod: error running /logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=services-8190 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.242.133:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.242.133:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8190
STEP: verifying service up-down-2 is still up
Jun 23 13:17:04.248: INFO: Creating new host exec pod
Jun 23 13:17:04.257: INFO: Waiting up to 5m0s for pod "verify-service-up-host-exec-pod" in namespace "services-8190" to be "running and ready"
... skipping 114 lines ...
• [SLOW TEST:127.386 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to up and down services
  test/e2e/network/service.go:1045
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":5,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:04.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 13:18:04.377: INFO: Waiting up to 5m0s for pod "downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00" in namespace "downward-api-93" to be "Succeeded or Failed"
Jun 23 13:18:04.382: INFO: Pod "downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.640452ms
Jun 23 13:18:06.386: INFO: Pod "downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008929755s
Jun 23 13:18:08.387: INFO: Pod "downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00921961s
Jun 23 13:18:10.393: INFO: Pod "downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015958045s
STEP: Saw pod success
Jun 23 13:18:10.393: INFO: Pod "downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00" satisfied condition "Succeeded or Failed"
Jun 23 13:18:10.407: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00 container dapi-container: <nil>
STEP: delete the pod
Jun 23 13:18:10.451: INFO: Waiting for pod downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00 to disappear
Jun 23 13:18:10.458: INFO: Pod downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.133 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:10.498: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 181 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:256
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:18:11.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5539" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:11.893: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 55 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:16:45.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
• [SLOW TEST:86.688 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:12.585: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 75 lines ...
Jun 23 13:17:53.249: INFO: PersistentVolumeClaim pvc-94lgg found but phase is Pending instead of Bound.
Jun 23 13:17:55.253: INFO: PersistentVolumeClaim pvc-94lgg found and phase=Bound (12.033645582s)
Jun 23 13:17:55.253: INFO: Waiting up to 3m0s for PersistentVolume local-dvvw8 to have phase Bound
Jun 23 13:17:55.263: INFO: PersistentVolume local-dvvw8 found and phase=Bound (9.018309ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j7kc
STEP: Creating a pod to test subpath
Jun 23 13:17:55.281: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j7kc" in namespace "provisioning-3286" to be "Succeeded or Failed"
Jun 23 13:17:55.290: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619359ms
Jun 23 13:17:57.305: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023641144s
Jun 23 13:17:59.295: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013667802s
Jun 23 13:18:01.295: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013896569s
Jun 23 13:18:03.296: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014836414s
STEP: Saw pod success
Jun 23 13:18:03.296: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc" satisfied condition "Succeeded or Failed"
Jun 23 13:18:03.301: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod pod-subpath-test-preprovisionedpv-j7kc container test-container-subpath-preprovisionedpv-j7kc: <nil>
STEP: delete the pod
Jun 23 13:18:03.332: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j7kc to disappear
Jun 23 13:18:03.336: INFO: Pod pod-subpath-test-preprovisionedpv-j7kc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j7kc
Jun 23 13:18:03.336: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j7kc" in namespace "provisioning-3286"
STEP: Creating pod pod-subpath-test-preprovisionedpv-j7kc
STEP: Creating a pod to test subpath
Jun 23 13:18:03.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j7kc" in namespace "provisioning-3286" to be "Succeeded or Failed"
Jun 23 13:18:03.353: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.583363ms
Jun 23 13:18:05.361: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013881974s
Jun 23 13:18:07.358: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010444767s
Jun 23 13:18:09.357: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009639078s
Jun 23 13:18:11.358: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010960595s
Jun 23 13:18:13.358: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.010465045s
STEP: Saw pod success
Jun 23 13:18:13.358: INFO: Pod "pod-subpath-test-preprovisionedpv-j7kc" satisfied condition "Succeeded or Failed"
Jun 23 13:18:13.361: INFO: Trying to get logs from node nodes-us-central1-a-g3vq pod pod-subpath-test-preprovisionedpv-j7kc container test-container-subpath-preprovisionedpv-j7kc: <nil>
STEP: delete the pod
Jun 23 13:18:13.392: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j7kc to disappear
Jun 23 13:18:13.395: INFO: Pod pod-subpath-test-preprovisionedpv-j7kc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j7kc
Jun 23 13:18:13.395: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j7kc" in namespace "provisioning-3286"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":119,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:13.585: INFO: Only supported for providers [azure] (not gce)
... skipping 47 lines ...
Jun 23 13:18:07.176: INFO: PersistentVolumeClaim pvc-q6n9m found but phase is Pending instead of Bound.
Jun 23 13:18:09.180: INFO: PersistentVolumeClaim pvc-q6n9m found and phase=Bound (10.031469702s)
Jun 23 13:18:09.180: INFO: Waiting up to 3m0s for PersistentVolume local-mbwzz to have phase Bound
Jun 23 13:18:09.183: INFO: PersistentVolume local-mbwzz found and phase=Bound (2.754166ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4r6c
STEP: Creating a pod to test subpath
Jun 23 13:18:09.194: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4r6c" in namespace "provisioning-6569" to be "Succeeded or Failed"
Jun 23 13:18:09.196: INFO: Pod "pod-subpath-test-preprovisionedpv-4r6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66995ms
Jun 23 13:18:11.201: INFO: Pod "pod-subpath-test-preprovisionedpv-4r6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007100893s
Jun 23 13:18:13.202: INFO: Pod "pod-subpath-test-preprovisionedpv-4r6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008418322s
Jun 23 13:18:15.207: INFO: Pod "pod-subpath-test-preprovisionedpv-4r6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013656821s
STEP: Saw pod success
Jun 23 13:18:15.208: INFO: Pod "pod-subpath-test-preprovisionedpv-4r6c" satisfied condition "Succeeded or Failed"
Jun 23 13:18:15.218: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-preprovisionedpv-4r6c container test-container-subpath-preprovisionedpv-4r6c: <nil>
STEP: delete the pod
Jun 23 13:18:15.258: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4r6c to disappear
Jun 23 13:18:15.262: INFO: Pod pod-subpath-test-preprovisionedpv-4r6c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4r6c
Jun 23 13:18:15.262: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4r6c" in namespace "provisioning-6569"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":87,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:15.523: INFO: Driver local doesn't support ext4 -- skipping
... skipping 38 lines ...
• [SLOW TEST:5.175 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:15.782: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
• [SLOW TEST:56.106 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should not create pods when created in suspend state
  test/e2e/apps/job.go:103
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:18.042: INFO: Only supported for providers [azure] (not gce)
... skipping 280 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 150 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should store data
      test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":9,"skipped":114,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 63 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:18:22.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8629" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":10,"skipped":116,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:22.645: INFO: Only supported for providers [azure] (not gce)
... skipping 166 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":45,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:26.421: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 46 lines ...
• [SLOW TEST:76.345 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  test/e2e/common/node/container_probe.go:261
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":10,"skipped":65,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:26.555: INFO: Only supported for providers [vsphere] (not gce)
... skipping 117 lines ...
[It] should support file as subpath [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:232
Jun 23 13:17:55.116: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 13:17:55.121: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-c4zb
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 13:17:55.130: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-c4zb" in namespace "provisioning-6308" to be "Succeeded or Failed"
Jun 23 13:17:55.138: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.683165ms
Jun 23 13:17:57.147: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01706593s
Jun 23 13:17:59.152: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02149319s
Jun 23 13:18:01.143: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012587375s
Jun 23 13:18:03.143: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Running", Reason="", readiness=true. Elapsed: 8.012675931s
Jun 23 13:18:05.142: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Running", Reason="", readiness=true. Elapsed: 10.011953143s
... skipping 6 lines ...
Jun 23 13:18:19.144: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Running", Reason="", readiness=true. Elapsed: 24.013964475s
Jun 23 13:18:21.143: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Running", Reason="", readiness=true. Elapsed: 26.01230271s
Jun 23 13:18:23.143: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Running", Reason="", readiness=true. Elapsed: 28.012602386s
Jun 23 13:18:25.144: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Running", Reason="", readiness=true. Elapsed: 30.014018375s
Jun 23 13:18:27.143: INFO: Pod "pod-subpath-test-inlinevolume-c4zb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.012740861s
STEP: Saw pod success
Jun 23 13:18:27.143: INFO: Pod "pod-subpath-test-inlinevolume-c4zb" satisfied condition "Succeeded or Failed"
Jun 23 13:18:27.148: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-subpath-test-inlinevolume-c4zb container test-container-subpath-inlinevolume-c4zb: <nil>
STEP: delete the pod
Jun 23 13:18:27.168: INFO: Waiting for pod pod-subpath-test-inlinevolume-c4zb to disappear
Jun 23 13:18:27.173: INFO: Pod pod-subpath-test-inlinevolume-c4zb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-c4zb
Jun 23 13:18:27.173: INFO: Deleting pod "pod-subpath-test-inlinevolume-c4zb" in namespace "provisioning-6308"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":127,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:27.230: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override command
Jun 23 13:18:13.628: INFO: Waiting up to 5m0s for pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318" in namespace "containers-9045" to be "Succeeded or Failed"
Jun 23 13:18:13.632: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309305ms
Jun 23 13:18:15.640: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011521437s
Jun 23 13:18:17.643: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01450715s
Jun 23 13:18:19.637: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009127719s
Jun 23 13:18:21.636: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007594297s
Jun 23 13:18:23.637: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00914557s
Jun 23 13:18:25.638: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 12.010376184s
Jun 23 13:18:27.642: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 14.013849954s
Jun 23 13:18:29.638: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009888653s
Jun 23 13:18:31.638: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.010230109s
STEP: Saw pod success
Jun 23 13:18:31.638: INFO: Pod "client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318" satisfied condition "Succeeded or Failed"
Jun 23 13:18:31.642: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:18:31.676: INFO: Waiting for pod client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318 to disappear
Jun 23 13:18:31.679: INFO: Pod client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318 no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:18.105 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":124,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 30 lines ...
test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":10,"skipped":106,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:31.769: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 171 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 31 lines ...
Jun 23 13:18:22.541: INFO: PersistentVolumeClaim pvc-cntkt found but phase is Pending instead of Bound.
Jun 23 13:18:24.547: INFO: PersistentVolumeClaim pvc-cntkt found and phase=Bound (8.022528103s)
Jun 23 13:18:24.547: INFO: Waiting up to 3m0s for PersistentVolume local-c2pwd to have phase Bound
Jun 23 13:18:24.550: INFO: PersistentVolume local-c2pwd found and phase=Bound (3.577542ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2zfq
STEP: Creating a pod to test subpath
Jun 23 13:18:24.564: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2zfq" in namespace "provisioning-3880" to be "Succeeded or Failed"
Jun 23 13:18:24.569: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Pending", Reason="", readiness=false. Elapsed: 5.073263ms
Jun 23 13:18:26.587: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023747123s
Jun 23 13:18:28.574: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010077741s
Jun 23 13:18:30.579: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015270897s
Jun 23 13:18:32.586: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02269198s
Jun 23 13:18:34.574: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01067323s
Jun 23 13:18:36.576: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.01243591s
STEP: Saw pod success
Jun 23 13:18:36.576: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfq" satisfied condition "Succeeded or Failed"
Jun 23 13:18:36.580: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-preprovisionedpv-2zfq container test-container-subpath-preprovisionedpv-2zfq: <nil>
STEP: delete the pod
Jun 23 13:18:36.596: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2zfq to disappear
Jun 23 13:18:36.600: INFO: Pod pod-subpath-test-preprovisionedpv-2zfq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2zfq
Jun 23 13:18:36.601: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2zfq" in namespace "provisioning-3880"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":37,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:20.217 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":6,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 206 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      test/e2e/storage/testsuites/provisioning.go:428
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":7,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:38.589: INFO: Only supported for providers [openstack] (not gce)
... skipping 41 lines ...
  test/e2e/common/node/runtime.go:43
    when running a container with a new image
    test/e2e/common/node/runtime.go:259
      should be able to pull from private registry with secret [NodeConformance]
      test/e2e/common/node/runtime.go:386
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":11,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 119 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      test/e2e/storage/testsuites/ephemeral.go:315
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":10,"skipped":143,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 44 lines ...
STEP: Destroying namespace "services-3704" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":11,"skipped":150,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:40.277: INFO: Only supported for providers [azure] (not gce)
... skipping 57 lines ...
Jun 23 13:18:34.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 23 13:18:36.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 23 13:18:38.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 13, 18, 32, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 23 13:18:41.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:647
STEP: Registering a webhook that the API server cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 13:18:41.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-864" for this suite.
... skipping 2 lines ...
  test/e2e/apimachinery/webhook.go:104


• [SLOW TEST:9.973 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
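The fail-closed behavior verified above is controlled by the webhook's failurePolicy. A minimal sketch, assuming hypothetical names and reusing the service/namespace seen in the log, of a ValidatingWebhookConfiguration that rejects requests whenever the API server cannot reach the backing service:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo             # hypothetical
webhooks:
- name: fail-closed.example.com      # hypothetical
  failurePolicy: Fail                # reject the request if the webhook cannot be reached
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: e2e-test-webhook         # service name from the log above
      namespace: webhook-864         # namespace from the log above
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]

Because the test deliberately points the webhook at a service the API server cannot talk to, every configmap CREATE in the watched namespace is rejected, which is exactly what the test asserts.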
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":35,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:30.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:11.279 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":9,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support readOnly directory specified in the volumeMount
  test/e2e/storage/testsuites/subpath.go:367
Jun 23 13:18:31.939: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 13:18:31.939: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-6gnj
STEP: Creating a pod to test subpath
Jun 23 13:18:31.952: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-6gnj" in namespace "provisioning-2804" to be "Succeeded or Failed"
Jun 23 13:18:31.961: INFO: Pod "pod-subpath-test-inlinevolume-6gnj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656699ms
Jun 23 13:18:33.972: INFO: Pod "pod-subpath-test-inlinevolume-6gnj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019814667s
Jun 23 13:18:35.967: INFO: Pod "pod-subpath-test-inlinevolume-6gnj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014255906s
Jun 23 13:18:37.967: INFO: Pod "pod-subpath-test-inlinevolume-6gnj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014817813s
Jun 23 13:18:39.969: INFO: Pod "pod-subpath-test-inlinevolume-6gnj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016326908s
Jun 23 13:18:41.972: INFO: Pod "pod-subpath-test-inlinevolume-6gnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020085441s
STEP: Saw pod success
Jun 23 13:18:41.973: INFO: Pod "pod-subpath-test-inlinevolume-6gnj" satisfied condition "Succeeded or Failed"
Jun 23 13:18:41.978: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-inlinevolume-6gnj container test-container-subpath-inlinevolume-6gnj: <nil>
STEP: delete the pod
Jun 23 13:18:42.006: INFO: Waiting for pod pod-subpath-test-inlinevolume-6gnj to disappear
Jun 23 13:18:42.019: INFO: Pod pod-subpath-test-inlinevolume-6gnj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-6gnj
Jun 23 13:18:42.019: INFO: Deleting pod "pod-subpath-test-inlinevolume-6gnj" in namespace "provisioning-2804"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":121,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:42.078: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 90 lines ...
• [SLOW TEST:61.105 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  test/e2e/apps/cronjob.go:241
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":13,"skipped":146,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:42.146: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:17:15.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
Jun 23 13:17:15.314: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1737
Jun 23 13:17:15.323: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1737
Jun 23 13:17:15.330: INFO: creating *v1.StatefulSet: csi-mock-volumes-1737-4675/csi-mockplugin
Jun 23 13:17:15.338: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1737
Jun 23 13:17:15.344: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1737"
Jun 23 13:17:15.353: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1737 to register on node nodes-us-central1-a-pp7m
I0623 13:17:24.390289    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0623 13:17:24.398883    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1737","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0623 13:17:24.403298    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0623 13:17:24.409043    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0623 13:17:24.498378    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1737","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0623 13:17:24.596708    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1737"},"Error":"","FullError":null}
STEP: Creating pod
Jun 23 13:17:31.633: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 23 13:17:31.642: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ww7tg] to have phase Bound
I0623 13:17:31.651640    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jun 23 13:17:31.654: INFO: PersistentVolumeClaim pvc-ww7tg found but phase is Pending instead of Bound.
I0623 13:17:32.659714    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa"}}},"Error":"","FullError":null}
Jun 23 13:17:33.669: INFO: PersistentVolumeClaim pvc-ww7tg found and phase=Bound (2.02700765s)
Jun 23 13:17:33.706: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-b858q" in namespace "csi-mock-volumes-1737" to be "running"
Jun 23 13:17:33.731: INFO: Pod "pvc-volume-tester-b858q": Phase="Pending", Reason="", readiness=false. Elapsed: 24.618841ms
I0623 13:17:35.190164    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 13:17:35.202670    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 13:17:35.206377    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 23 13:17:35.210: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:17:35.211: INFO: ExecWithOptions: Clientset creation
Jun 23 13:17:35.211: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/csi-mock-volumes-1737-4675/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-1737%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-1737%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
I0623 13:17:35.304955    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-1737/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa","storage.kubernetes.io/csiProvisionerIdentity":"1655990244413-8081-csi-mock-csi-mock-volumes-1737"}},"Response":{},"Error":"","FullError":null}
I0623 13:17:35.310058    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 13:17:35.316039    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 13:17:35.321568    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 23 13:17:35.327: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:17:35.328: INFO: ExecWithOptions: Clientset creation
Jun 23 13:17:35.328: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/csi-mock-volumes-1737-4675/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F88a97b56-22e7-4510-b29d-f4f990f31b18%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d39f1824-e2f7-499d-ae1d-39f453f57faa%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F88a97b56-22e7-4510-b29d-f4f990f31b18%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d39f1824-e2f7-499d-ae1d-39f453f57faa%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 23 13:17:35.430: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:17:35.431: INFO: ExecWithOptions: Clientset creation
Jun 23 13:17:35.431: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/csi-mock-volumes-1737-4675/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F88a97b56-22e7-4510-b29d-f4f990f31b18%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d39f1824-e2f7-499d-ae1d-39f453f57faa%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F88a97b56-22e7-4510-b29d-f4f990f31b18%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d39f1824-e2f7-499d-ae1d-39f453f57faa%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 23 13:17:35.512: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:17:35.513: INFO: ExecWithOptions: Clientset creation
Jun 23 13:17:35.513: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/csi-mock-volumes-1737-4675/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F88a97b56-22e7-4510-b29d-f4f990f31b18%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d39f1824-e2f7-499d-ae1d-39f453f57faa%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0623 13:17:35.628357    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-1737/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/88a97b56-22e7-4510-b29d-f4f990f31b18/volumes/kubernetes.io~csi/pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa","storage.kubernetes.io/csiProvisionerIdentity":"1655990244413-8081-csi-mock-csi-mock-volumes-1737"}},"Response":{},"Error":"","FullError":null}
Jun 23 13:17:35.736: INFO: Pod "pvc-volume-tester-b858q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02941118s
I0623 13:17:37.307497    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 13:17:37.310786    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/88a97b56-22e7-4510-b29d-f4f990f31b18/volumes/kubernetes.io~csi/pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null}
Jun 23 13:17:37.737: INFO: Pod "pvc-volume-tester-b858q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030196195s
Jun 23 13:17:39.738: INFO: Pod "pvc-volume-tester-b858q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031489663s
Jun 23 13:17:41.743: INFO: Pod "pvc-volume-tester-b858q": Phase="Running", Reason="", readiness=true. Elapsed: 8.036843654s
Jun 23 13:17:41.743: INFO: Pod "pvc-volume-tester-b858q" satisfied condition "running"
Jun 23 13:17:41.743: INFO: Deleting pod "pvc-volume-tester-b858q" in namespace "csi-mock-volumes-1737"
Jun 23 13:17:41.757: INFO: Wait up to 5m0s for pod "pvc-volume-tester-b858q" to be fully deleted
Jun 23 13:17:42.148: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:17:42.149: INFO: ExecWithOptions: Clientset creation
Jun 23 13:17:42.149: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/csi-mock-volumes-1737-4675/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F88a97b56-22e7-4510-b29d-f4f990f31b18%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d39f1824-e2f7-499d-ae1d-39f453f57faa%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0623 13:17:42.235693    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/88a97b56-22e7-4510-b29d-f4f990f31b18/volumes/kubernetes.io~csi/pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa/mount"},"Response":{},"Error":"","FullError":null}
I0623 13:17:42.251834    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 13:17:42.255628    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-1737/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
I0623 13:17:43.817710    7034 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 23 13:17:44.773: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-ww7tg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1737", SelfLink:"", UID:"d39f1824-e2f7-499d-ae1d-39f453f57faa", ResourceVersion:"8927", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025a66d8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0023c84a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0023c84b0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 13:17:44.774: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-ww7tg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1737", SelfLink:"", UID:"d39f1824-e2f7-499d-ae1d-39f453f57faa", ResourceVersion:"8928", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002648108), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002648168), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002720040), VolumeMode:(*v1.PersistentVolumeMode)(0xc0027200c0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 13:17:44.774: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-ww7tg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1737", SelfLink:"", UID:"d39f1824-e2f7-499d-ae1d-39f453f57faa", ResourceVersion:"8964", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f4528), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 32, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f4558), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa", StorageClassName:(*string)(0xc001274d50), VolumeMode:(*v1.PersistentVolumeMode)(0xc001274d70), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 13:17:44.774: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-ww7tg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1737", SelfLink:"", UID:"d39f1824-e2f7-499d-ae1d-39f453f57faa", ResourceVersion:"8965", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f45a0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 32, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f45d0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 32, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f4600), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa", StorageClassName:(*string)(0xc001274dd0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001274e20), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 13:17:44.774: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-ww7tg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1737", SelfLink:"", UID:"d39f1824-e2f7-499d-ae1d-39f453f57faa", ResourceVersion:"9660", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), DeletionTimestamp:time.Date(2022, time.June, 23, 13, 17, 43, 0, time.Local), DeletionGracePeriodSeconds:(*int64)(0xc0003b6b68), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1737"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f4660), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 32, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f4690), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 13, 17, 32, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019f46c0), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d39f1824-e2f7-499d-ae1d-39f453f57faa", StorageClassName:(*string)(0xc001274eb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001274ed0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
... skipping 71 lines ...
      test/e2e/storage/testsuites/volumelimits.go:249

      Driver local doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":6,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:42.168: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 113 lines ...
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:42.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name secret-emptykey-test-7433fdc6-68e7-4216-8577-cb63c085bac5
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
Jun 23 13:18:42.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4697" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":12,"skipped":141,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:42.344: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":6,"skipped":66,"failed":0}
[BeforeEach] [sig-node] Mount propagation
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:22.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mount-propagation
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 133 lines ...
Jun 23 13:18:34.795: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:34.795: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/hostexec-nodes-us-central1-a-g3vq-rcddt/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-5976%22%2Fhost%3B+mount+-t+tmpfs+e2e-mount-propagation-host+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-5976%22%2Fhost%3B+echo+host+%3E+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-5976%22%2Fhost%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 13:18:34.991: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:34.992: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:34.993: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:34.993: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.091: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jun 23 13:18:35.095: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.095: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.095: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.096: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.181: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:35.191: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.192: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.192: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.192: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.329: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:35.332: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.332: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.333: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.333: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.468: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:35.472: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.472: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.473: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.473: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.618: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jun 23 13:18:35.622: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.622: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.623: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.623: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.745: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jun 23 13:18:35.748: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.749: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.750: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.750: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.849: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jun 23 13:18:35.855: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:35.855: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:35.855: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:35.855: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:35.978: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:36.002: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.003: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.004: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.004: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.130: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:36.133: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.133: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.135: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.135: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.295: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jun 23 13:18:36.299: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.299: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.300: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.300: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.424: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:36.428: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.428: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.429: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.429: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.521: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:36.525: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.525: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.526: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.526: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.615: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jun 23 13:18:36.621: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.621: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.622: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.622: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.722: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:36.729: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.729: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.730: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.730: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.827: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:36.838: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:36.838: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:36.839: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:36.840: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:36.998: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:37.007: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:37.007: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:37.008: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:37.008: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:37.120: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:37.124: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:37.124: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:37.125: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:37.125: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:37.295: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:37.298: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:37.298: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:37.299: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:37.299: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:37.452: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jun 23 13:18:37.456: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 13:18:37.456: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:37.457: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:37.457: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 13:18:37.609: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 23 13:18:37.609: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-5976 PodName:hostexec-nodes-us-central1-a-g3vq-rcddt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 23 13:18:37.609: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 13:18:37.610: INFO: ExecWithOptions: Clientset creation
Jun 23 13:18:37.610: INFO: ExecWithOptions: execute(POST https://35.202.140.103/api/v1/namespaces/mount-propagation-5976/pods/hostexec-nodes-us-central1-a-g3vq-rcddt/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=pidof+kubelet&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 13:18:37.857: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4905 -m cat "/var/lib/kubelet/mount-propagation-5976/host/file"] Namespace:mount-propagation-5976 PodName:hostexec-nodes-us-central1-a-g3vq-rcddt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 23 13:18:37.858: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
• [SLOW TEST:19.958 seconds]
[sig-node] Mount propagation
test/e2e/node/framework.go:23
  should propagate mounts within defined scopes
  test/e2e/node/mount_propagation.go:85
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":7,"skipped":66,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 139 lines ...
• [SLOW TEST:26.353 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  test/e2e/storage/pvc_protection.go:116
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":5,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 70 lines ...
      test/e2e/storage/testsuites/volumes.go:161

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":9,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 93 lines ...
test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  test/e2e/apimachinery/custom_resource_definition.go:50
    listing custom resource definition objects works  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":10,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:48.314: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 23 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 23 13:18:38.460: INFO: Waiting up to 5m0s for pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac" in namespace "projected-1784" to be "Succeeded or Failed"
Jun 23 13:18:38.465: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.455088ms
Jun 23 13:18:40.470: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010495116s
Jun 23 13:18:42.502: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042297497s
Jun 23 13:18:44.473: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012868271s
Jun 23 13:18:46.470: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010189844s
Jun 23 13:18:48.476: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015689424s
STEP: Saw pod success
Jun 23 13:18:48.476: INFO: Pod "metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac" satisfied condition "Succeeded or Failed"
Jun 23 13:18:48.488: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac container client-container: <nil>
STEP: delete the pod
Jun 23 13:18:48.526: INFO: Waiting for pod metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac to disappear
Jun 23 13:18:48.529: INFO: Pod metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.117 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:93
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Secrets
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-652eb550-8c92-4dd3-96a5-6af9a0ce177e
STEP: Creating a pod to test consume secrets
Jun 23 13:18:38.644: INFO: Waiting up to 5m0s for pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b" in namespace "secrets-8931" to be "Succeeded or Failed"
Jun 23 13:18:38.652: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.695434ms
Jun 23 13:18:40.657: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01267847s
Jun 23 13:18:42.686: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041878613s
Jun 23 13:18:44.658: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013567125s
Jun 23 13:18:46.660: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015752947s
Jun 23 13:18:48.663: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01867308s
STEP: Saw pod success
Jun 23 13:18:48.663: INFO: Pod "pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b" satisfied condition "Succeeded or Failed"
Jun 23 13:18:48.672: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b container secret-env-test: <nil>
STEP: delete the pod
Jun 23 13:18:48.698: INFO: Waiting for pod pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b to disappear
Jun 23 13:18:48.701: INFO: Pod pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b no longer exists
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.111 seconds]
[sig-node] Secrets
test/e2e/common/node/framework.go:23
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:48.731: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 127 lines ...
      Only supported for providers [openstack] (not gce)

      test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":6,"skipped":82,"failed":0}
[BeforeEach] [sig-node] Pods Extended
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:17:01.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 132 lines ...
test/e2e/node/framework.go:23
  Pod Container Status
  test/e2e/node/pods.go:202
    should never report success for a pending container
    test/e2e/node/pods.go:208
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":7,"skipped":82,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:51.615: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 65 lines ...
• [SLOW TEST:8.194 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Containers
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:42.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override arguments
Jun 23 13:18:42.922: INFO: Waiting up to 5m0s for pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173" in namespace "containers-9237" to be "Succeeded or Failed"
Jun 23 13:18:42.938: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173": Phase="Pending", Reason="", readiness=false. Elapsed: 16.546088ms
Jun 23 13:18:44.965: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042866887s
Jun 23 13:18:46.948: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026190243s
Jun 23 13:18:48.951: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028938082s
Jun 23 13:18:50.943: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020940403s
Jun 23 13:18:52.950: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.027859762s
STEP: Saw pod success
Jun 23 13:18:52.950: INFO: Pod "client-containers-77f88a9b-1ee5-483d-9770-173957da4173" satisfied condition "Succeeded or Failed"
Jun 23 13:18:52.955: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod client-containers-77f88a9b-1ee5-483d-9770-173957da4173 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:18:53.008: INFO: Waiting for pod client-containers-77f88a9b-1ee5-483d-9770-173957da4173 to disappear
Jun 23 13:18:53.020: INFO: Pod client-containers-77f88a9b-1ee5-483d-9770-173957da4173 no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.324 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":154,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:53.045: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
• [SLOW TEST:12.304 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":7,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:54.590: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
• [SLOW TEST:16.282 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":12,"skipped":161,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:56.618: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/framework/framework.go:187

... skipping 108 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:256
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":100,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:57.487: INFO: Only supported for providers [azure] (not gce)
... skipping 342 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should provide basic identity
    test/e2e/apps/statefulset.go:132
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":3,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:18:59.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5115" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:59.168: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 29 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:18:59.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5847" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":13,"skipped":163,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:18:59.979: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 50 lines ...
• [SLOW TEST:8.052 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:01.128: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:187

... skipping 90 lines ...
Jun 23 13:18:53.113: INFO: PersistentVolumeClaim pvc-fjmx2 found but phase is Pending instead of Bound.
Jun 23 13:18:55.118: INFO: PersistentVolumeClaim pvc-fjmx2 found and phase=Bound (2.024980488s)
Jun 23 13:18:55.119: INFO: Waiting up to 3m0s for PersistentVolume local-75pcd to have phase Bound
Jun 23 13:18:55.122: INFO: PersistentVolume local-75pcd found and phase=Bound (3.217397ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-8msf
STEP: Creating a pod to test exec-volume-test
Jun 23 13:18:55.134: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8msf" in namespace "volume-8251" to be "Succeeded or Failed"
Jun 23 13:18:55.144: INFO: Pod "exec-volume-test-preprovisionedpv-8msf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088528ms
Jun 23 13:18:57.148: INFO: Pod "exec-volume-test-preprovisionedpv-8msf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013764713s
Jun 23 13:18:59.152: INFO: Pod "exec-volume-test-preprovisionedpv-8msf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017458338s
Jun 23 13:19:01.149: INFO: Pod "exec-volume-test-preprovisionedpv-8msf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014391788s
STEP: Saw pod success
Jun 23 13:19:01.149: INFO: Pod "exec-volume-test-preprovisionedpv-8msf" satisfied condition "Succeeded or Failed"
Jun 23 13:19:01.152: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod exec-volume-test-preprovisionedpv-8msf container exec-container-preprovisionedpv-8msf: <nil>
STEP: delete the pod
Jun 23 13:19:01.173: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8msf to disappear
Jun 23 13:19:01.177: INFO: Pod exec-volume-test-preprovisionedpv-8msf no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8msf
Jun 23 13:19:01.177: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8msf" in namespace "volume-8251"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 36 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":12,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:01.365: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 258 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":13,"skipped":134,"failed":0}
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:41.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:20.350 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE]
  test/e2e/network/dns.go:70
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Provider:GCE]","total":-1,"completed":14,"skipped":134,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:54.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test env composition
Jun 23 13:18:54.664: INFO: Waiting up to 5m0s for pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248" in namespace "var-expansion-6107" to be "Succeeded or Failed"
Jun 23 13:18:54.667: INFO: Pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248": Phase="Pending", Reason="", readiness=false. Elapsed: 3.487818ms
Jun 23 13:18:56.679: INFO: Pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015211262s
Jun 23 13:18:58.674: INFO: Pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009538431s
Jun 23 13:19:00.677: INFO: Pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01259944s
Jun 23 13:19:02.674: INFO: Pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.009866596s
STEP: Saw pod success
Jun 23 13:19:02.674: INFO: Pod "var-expansion-43c347d6-d91d-4908-8eac-a91abba00248" satisfied condition "Succeeded or Failed"
Jun 23 13:19:02.677: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod var-expansion-43c347d6-d91d-4908-8eac-a91abba00248 container dapi-container: <nil>
STEP: delete the pod
Jun 23 13:19:02.697: INFO: Waiting for pod var-expansion-43c347d6-d91d-4908-8eac-a91abba00248 to disappear
Jun 23 13:19:02.699: INFO: Pod var-expansion-43c347d6-d91d-4908-8eac-a91abba00248 no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.090 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:02.735: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 14 lines ...
  test/e2e/framework/framework.go:187
Jun 23 13:19:03.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-560" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:03.617: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 46 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-projected-qf9d
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 13:18:36.890: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qf9d" in namespace "subpath-9419" to be "Succeeded or Failed"
Jun 23 13:18:36.903: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.739729ms
Jun 23 13:18:38.914: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024625428s
Jun 23 13:18:40.909: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019046228s
Jun 23 13:18:42.932: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 6.042642855s
Jun 23 13:18:44.923: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 8.033347196s
Jun 23 13:18:46.909: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 10.018922709s
... skipping 4 lines ...
Jun 23 13:18:56.907: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 20.017216759s
Jun 23 13:18:58.911: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 22.021364253s
Jun 23 13:19:00.909: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 24.018999811s
Jun 23 13:19:02.906: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Running", Reason="", readiness=true. Elapsed: 26.016651056s
Jun 23 13:19:04.907: INFO: Pod "pod-subpath-test-projected-qf9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.017428108s
STEP: Saw pod success
Jun 23 13:19:04.907: INFO: Pod "pod-subpath-test-projected-qf9d" satisfied condition "Succeeded or Failed"
Jun 23 13:19:04.910: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-subpath-test-projected-qf9d container test-container-subpath-projected-qf9d: <nil>
STEP: delete the pod
Jun 23 13:19:04.925: INFO: Waiting for pod pod-subpath-test-projected-qf9d to disappear
Jun 23 13:19:04.941: INFO: Pod pod-subpath-test-projected-qf9d no longer exists
STEP: Deleting pod pod-subpath-test-projected-qf9d
Jun 23 13:19:04.942: INFO: Deleting pod "pod-subpath-test-projected-qf9d" in namespace "subpath-9419"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with projected pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:18:59.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in volume subpath
Jun 23 13:18:59.302: INFO: Waiting up to 5m0s for pod "var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c" in namespace "var-expansion-6930" to be "Succeeded or Failed"
Jun 23 13:18:59.329: INFO: Pod "var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.670447ms
Jun 23 13:19:01.335: INFO: Pod "var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032079026s
Jun 23 13:19:03.334: INFO: Pod "var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031519516s
Jun 23 13:19:05.334: INFO: Pod "var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031597459s
STEP: Saw pod success
Jun 23 13:19:05.334: INFO: Pod "var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c" satisfied condition "Succeeded or Failed"
Jun 23 13:19:05.339: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c container dapi-container: <nil>
STEP: delete the pod
Jun 23 13:19:05.357: INFO: Waiting for pod var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c to disappear
Jun 23 13:19:05.366: INFO: Pod var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.206 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:05.413: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:2079
------------------------------
... skipping 64 lines ...
• [SLOW TEST:13.835 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":8,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:05.511: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 195 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:108
STEP: Creating a pod to test downward API volume plugin
Jun 23 13:19:02.217: INFO: Waiting up to 5m0s for pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307" in namespace "projected-6813" to be "Succeeded or Failed"
Jun 23 13:19:02.228: INFO: Pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307": Phase="Pending", Reason="", readiness=false. Elapsed: 10.891778ms
Jun 23 13:19:04.235: INFO: Pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017405477s
Jun 23 13:19:06.233: INFO: Pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015617883s
Jun 23 13:19:08.234: INFO: Pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016193382s
Jun 23 13:19:10.234: INFO: Pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016184628s
STEP: Saw pod success
Jun 23 13:19:10.234: INFO: Pod "metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307" satisfied condition "Succeeded or Failed"
Jun 23 13:19:10.238: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307 container client-container: <nil>
STEP: delete the pod
Jun 23 13:19:10.266: INFO: Waiting for pod metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307 to disappear
Jun 23 13:19:10.270: INFO: Pod metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.143 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":137,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":109,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:15:53.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
Jun 23 13:16:22.066: INFO: Pod "pvc-volume-tester-nzn85": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013806245s
Jun 23 13:16:24.065: INFO: Pod "pvc-volume-tester-nzn85": Phase="Pending", Reason="", readiness=false. Elapsed: 12.013518711s
Jun 23 13:16:26.071: INFO: Pod "pvc-volume-tester-nzn85": Phase="Pending", Reason="", readiness=false. Elapsed: 14.019091611s
Jun 23 13:16:28.074: INFO: Pod "pvc-volume-tester-nzn85": Phase="Running", Reason="", readiness=true. Elapsed: 16.022493817s
Jun 23 13:16:28.074: INFO: Pod "pvc-volume-tester-nzn85" satisfied condition "running"
STEP: checking for CSIInlineVolumes feature
Jun 23 13:16:28.147: INFO: Error getting logs for pod inline-volume-74hnl: the server rejected our request for an unknown reason (get pods inline-volume-74hnl)
Jun 23 13:16:28.187: INFO: Deleting pod "inline-volume-74hnl" in namespace "csi-mock-volumes-2557"
Jun 23 13:16:28.205: INFO: Wait up to 5m0s for pod "inline-volume-74hnl" to be fully deleted
STEP: Deleting the previously created pod
Jun 23 13:18:38.222: INFO: Deleting pod "pvc-volume-tester-nzn85" in namespace "csi-mock-volumes-2557"
Jun 23 13:18:38.229: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nzn85" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 13:18:44.262: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2557
Jun 23 13:18:44.262: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: d6779d08-a6d3-44c0-ab08-112627a4712e
Jun 23 13:18:44.262: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jun 23 13:18:44.262: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jun 23 13:18:44.262: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-nzn85
Jun 23 13:18:44.262: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"a5de2a25-f2f6-11ec-bbab-f27c23a1f79d","target_path":"/var/lib/kubelet/pods/d6779d08-a6d3-44c0-ab08-112627a4712e/volumes/kubernetes.io~csi/pvc-775c2834-12c5-4aa8-9fc5-b89f060940a0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-nzn85
Jun 23 13:18:44.262: INFO: Deleting pod "pvc-volume-tester-nzn85" in namespace "csi-mock-volumes-2557"
STEP: Deleting claim pvc-zjxrw
Jun 23 13:18:44.282: INFO: Waiting up to 2m0s for PersistentVolume pvc-775c2834-12c5-4aa8-9fc5-b89f060940a0 to get deleted
Jun 23 13:18:44.288: INFO: PersistentVolume pvc-775c2834-12c5-4aa8-9fc5-b89f060940a0 found and phase=Bound (6.048837ms)
Jun 23 13:18:46.293: INFO: PersistentVolume pvc-775c2834-12c5-4aa8-9fc5-b89f060940a0 found and phase=Released (2.010958836s)
... skipping 48 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:467
    should be passed when podInfoOnMount=true
    test/e2e/storage/csi_mock_volume.go:517
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":9,"skipped":109,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:19:05.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 23 13:19:05.707: INFO: Waiting up to 5m0s for pod "pod-0ad668fe-7d18-48f9-b873-b6628dc62c37" in namespace "emptydir-3840" to be "Succeeded or Failed"
Jun 23 13:19:05.712: INFO: Pod "pod-0ad668fe-7d18-48f9-b873-b6628dc62c37": Phase="Pending", Reason="", readiness=false. Elapsed: 5.31392ms
Jun 23 13:19:07.718: INFO: Pod "pod-0ad668fe-7d18-48f9-b873-b6628dc62c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011836497s
Jun 23 13:19:09.717: INFO: Pod "pod-0ad668fe-7d18-48f9-b873-b6628dc62c37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01064995s
Jun 23 13:19:11.716: INFO: Pod "pod-0ad668fe-7d18-48f9-b873-b6628dc62c37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009447141s
STEP: Saw pod success
Jun 23 13:19:11.716: INFO: Pod "pod-0ad668fe-7d18-48f9-b873-b6628dc62c37" satisfied condition "Succeeded or Failed"
Jun 23 13:19:11.719: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-0ad668fe-7d18-48f9-b873-b6628dc62c37 container test-container: <nil>
STEP: delete the pod
Jun 23 13:19:11.736: INFO: Waiting for pod pod-0ad668fe-7d18-48f9-b873-b6628dc62c37 to disappear
Jun 23 13:19:11.740: INFO: Pod pod-0ad668fe-7d18-48f9-b873-b6628dc62c37 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.090 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 49 lines ...
• [SLOW TEST:25.203 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:14.058: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:187

... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 109 lines ...
• [SLOW TEST:10.274 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
test/e2e/auth/framework.go:23
  should support building a client with a CSR
  test/e2e/auth/certificates.go:59
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":6,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 125 lines ...
Jun 23 13:19:08.375: INFO: PersistentVolumeClaim pvc-c7qp6 found but phase is Pending instead of Bound.
Jun 23 13:19:10.380: INFO: PersistentVolumeClaim pvc-c7qp6 found and phase=Bound (8.048736511s)
Jun 23 13:19:10.380: INFO: Waiting up to 3m0s for PersistentVolume local-8fkjq to have phase Bound
Jun 23 13:19:10.383: INFO: PersistentVolume local-8fkjq found and phase=Bound (3.358716ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wkj6
STEP: Creating a pod to test subpath
Jun 23 13:19:10.394: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wkj6" in namespace "provisioning-714" to be "Succeeded or Failed"
Jun 23 13:19:10.405: INFO: Pod "pod-subpath-test-preprovisionedpv-wkj6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.876866ms
Jun 23 13:19:12.410: INFO: Pod "pod-subpath-test-preprovisionedpv-wkj6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015046915s
Jun 23 13:19:14.409: INFO: Pod "pod-subpath-test-preprovisionedpv-wkj6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014119232s
Jun 23 13:19:16.413: INFO: Pod "pod-subpath-test-preprovisionedpv-wkj6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018680036s
Jun 23 13:19:18.411: INFO: Pod "pod-subpath-test-preprovisionedpv-wkj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016979864s
STEP: Saw pod success
Jun 23 13:19:18.412: INFO: Pod "pod-subpath-test-preprovisionedpv-wkj6" satisfied condition "Succeeded or Failed"
Jun 23 13:19:18.415: INFO: Trying to get logs from node nodes-us-central1-a-hmlq pod pod-subpath-test-preprovisionedpv-wkj6 container test-container-volume-preprovisionedpv-wkj6: <nil>
STEP: delete the pod
Jun 23 13:19:18.440: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wkj6 to disappear
Jun 23 13:19:18.444: INFO: Pod pod-subpath-test-preprovisionedpv-wkj6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wkj6
Jun 23 13:19:18.444: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wkj6" in namespace "provisioning-714"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":14,"skipped":164,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:18.633: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
STEP: Destroying namespace "services-4666" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":15,"skipped":166,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:18.886: INFO: Only supported for providers [aws] (not gce)
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:112
STEP: Creating configMap with name configmap-test-volume-map-bf81eb7a-d6d4-4f84-b28b-b0ff6e806719
STEP: Creating a pod to test consume configMaps
Jun 23 13:19:10.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee" in namespace "configmap-850" to be "Succeeded or Failed"
Jun 23 13:19:10.340: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83407ms
Jun 23 13:19:12.345: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008354193s
Jun 23 13:19:14.345: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008777168s
Jun 23 13:19:16.344: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007842108s
Jun 23 13:19:18.345: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008247661s
Jun 23 13:19:20.344: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.007131753s
STEP: Saw pod success
Jun 23 13:19:20.344: INFO: Pod "pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee" satisfied condition "Succeeded or Failed"
Jun 23 13:19:20.348: INFO: Trying to get logs from node nodes-us-central1-a-gl7l pod pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:19:20.373: INFO: Waiting for pod pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee to disappear
Jun 23 13:19:20.377: INFO: Pod pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.086 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:112
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":16,"skipped":138,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:20.452: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 93 lines ...
  test/e2e/kubectl/portforward.go:454
    that expects NO client request
    test/e2e/kubectl/portforward.go:464
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:465
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":10,"skipped":53,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:23.206: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 26 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:93
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":14,"skipped":160,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:19:06.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 172 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:26.839: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Driver local doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":88,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:19:17.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:10.057 seconds]
[sig-apps] StatefulSet
test/e2e/apps/framework.go:23
  MinReadySeconds should be honored when enabled
  test/e2e/apps/statefulset.go:1152
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet MinReadySeconds should be honored when enabled","total":-1,"completed":7,"skipped":88,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:27.913: INFO: Only supported for providers [aws] (not gce)
... skipping 62 lines ...
Jun 23 13:19:22.406: INFO: PersistentVolumeClaim pvc-94tw4 found but phase is Pending instead of Bound.
Jun 23 13:19:24.413: INFO: PersistentVolumeClaim pvc-94tw4 found and phase=Bound (10.032616842s)
Jun 23 13:19:24.413: INFO: Waiting up to 3m0s for PersistentVolume local-l7psg to have phase Bound
Jun 23 13:19:24.422: INFO: PersistentVolume local-l7psg found and phase=Bound (9.428217ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-rdxr
STEP: Creating a pod to test exec-volume-test
Jun 23 13:19:24.456: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-rdxr" in namespace "volume-463" to be "Succeeded or Failed"
Jun 23 13:19:24.462: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr": Phase="Pending", Reason="", readiness=false. Elapsed: 5.723067ms
Jun 23 13:19:26.470: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01402572s
Jun 23 13:19:28.467: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011133363s
Jun 23 13:19:30.468: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011307203s
Jun 23 13:19:32.465: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009136986s
Jun 23 13:19:34.482: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02550324s
STEP: Saw pod success
Jun 23 13:19:34.482: INFO: Pod "exec-volume-test-preprovisionedpv-rdxr" satisfied condition "Succeeded or Failed"
Jun 23 13:19:34.490: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod exec-volume-test-preprovisionedpv-rdxr container exec-container-preprovisionedpv-rdxr: <nil>
STEP: delete the pod
Jun 23 13:19:34.534: INFO: Waiting for pod exec-volume-test-preprovisionedpv-rdxr to disappear
Jun 23 13:19:34.543: INFO: Pod exec-volume-test-preprovisionedpv-rdxr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-rdxr
Jun 23 13:19:34.543: INFO: Deleting pod "exec-volume-test-preprovisionedpv-rdxr" in namespace "volume-463"
... skipping 32 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":103,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 13:19:35.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap that has name configmap-test-emptyKey-f1dfeca2-3ccb-456f-8d35-5723683d4999
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 13:19:35.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-315" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":11,"skipped":103,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:35.136: INFO: Only supported for providers [vsphere] (not gce)
... skipping 30 lines ...
Jun 23 13:19:26.947: INFO: Running '/logs/artifacts/e8f43fa1-f2f4-11ec-8dfe-daa417708791/kubectl --server=https://35.202.140.103 --kubeconfig=/root/.kube/config --namespace=kubectl-1291 create -f -'
Jun 23 13:19:27.147: INFO: stderr: ""
Jun 23 13:19:27.147: INFO: stdout: "pod/busybox1 created\n"
Jun 23 13:19:27.147: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [busybox1]
Jun 23 13:19:27.147: INFO: Waiting up to 5m0s for pod "busybox1" in namespace "kubectl-1291" to be "running and ready"
Jun 23 13:19:27.157: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.172059ms
Jun 23 13:19:27.157: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:19:29.161: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013392921s
Jun 23 13:19:29.161: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:19:31.161: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013103135s
Jun 23 13:19:31.161: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:19:33.161: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013216364s
Jun 23 13:19:33.161: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-gl7l' to be 'Running' but was 'Pending'
Jun 23 13:19:35.166: INFO: Pod "busybox1": Phase="Running", Reason="", readiness=true. Elapsed: 8.018644724s
Jun 23 13:19:35.166: INFO: Pod "busybox1" satisfied condition "running and ready"
Jun 23 13:19:35.166: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [busybox1]
[It] should copy a file from a running Pod
  test/e2e/kubectl/kubectl.go:1537
STEP: specifying a remote filepath busybox1:/root/foo/bar/foo.bar on the pod
... skipping 24 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl copy
  test/e2e/kubectl/kubectl.go:1520
    should copy a file from a running Pod
    test/e2e/kubectl/kubectl.go:1537
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":11,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:35.627: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:77
STEP: Creating configMap with name configmap-test-volume-93fe715f-893d-4a40-89fa-3d8053974ed6
STEP: Creating a pod to test consume configMaps
Jun 23 13:19:26.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816" in namespace "configmap-3294" to be "Succeeded or Failed"
Jun 23 13:19:26.607: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280583ms
Jun 23 13:19:28.613: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011332193s
Jun 23 13:19:30.613: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012249391s
Jun 23 13:19:32.615: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013392453s
Jun 23 13:19:34.613: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012327294s
Jun 23 13:19:36.612: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.010651501s
STEP: Saw pod success
Jun 23 13:19:36.612: INFO: Pod "pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816" satisfied condition "Succeeded or Failed"
Jun 23 13:19:36.614: INFO: Trying to get logs from node nodes-us-central1-a-pp7m pod pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 13:19:36.629: INFO: Waiting for pod pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816 to disappear
Jun 23 13:19:36.634: INFO: Pod pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.143 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:77
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":164,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:36.656: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/framework/framework.go:187

... skipping 25 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 172 lines ...
Jun 23 13:19:36.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-os-rejection
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should reject pod when the node OS doesn't match pod's OS
  test/e2e/common/node/pod_admission.go:38
Jun 23 13:19:36.829: INFO: Waiting up to 2m0s for pod "wrong-pod-os" in namespace "pod-os-rejection-2955" to be "failed with reason PodOSNotSupported"
Jun 23 13:19:36.834: INFO: Pod "wrong-pod-os": Phase="Pending", Reason="", readiness=false. Elapsed: 4.804376ms
Jun 23 13:19:38.843: INFO: Pod "wrong-pod-os": Phase="Failed", Reason="PodOSNotSupported", readiness=false. Elapsed: 2.014048649s
Jun 23 13:19:38.843: INFO: Pod "wrong-pod-os" satisfied condition "failed with reason PodOSNotSupported"
[AfterEach] [sig-node] PodOSRejection [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 13:19:38.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-os-rejection-2955" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS","total":-1,"completed":16,"skipped":178,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 13:19:38.879: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:187

... skipping 140 lines ...
Jun 23 13:19:35.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/common/node/init_container.go:164
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating the pod
Jun 23 13:19:35.682: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 13:19:42.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-563" for this suite.


• [SLOW TEST:7.367 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment","total":-1,"completed":9,"skipped":51,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":12,"skipped":99,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 32106 lines ...
o:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0623 13:25:08.208844      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.018267ms\"\nI0623 13:25:10.037181      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:10.088657      10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43\nI0623 13:25:10.104179      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"66.98061ms\"\nI0623 13:25:11.043919      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:11.087530      10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=42\nI0623 13:25:11.094421      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.56782ms\"\nI0623 13:25:13.858670      10 service.go:322] \"Service updated ports\" service=\"services-1551/svc-tolerate-unready\" portCount=1\nI0623 13:25:13.858733      10 service.go:437] \"Adding new service port\" portName=\"services-1551/svc-tolerate-unready:http\" servicePort=\"100.66.15.96:80/TCP\"\nI0623 13:25:13.858776      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:13.892737      10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=18 numNATRules=42\nI0623 13:25:13.897657      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.931304ms\"\nI0623 13:25:13.897951      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:13.936474      10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=18 numNATRules=42\nI0623 13:25:13.942706      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.005936ms\"\nI0623 13:25:14.539444      10 service.go:322] \"Service updated ports\" service=\"webhook-3725/e2e-test-webhook\" portCount=1\nI0623 13:25:14.672208      10 service.go:322] \"Service updated ports\" service=\"webhook-2773/e2e-test-webhook\" portCount=1\nI0623 13:25:14.943300      10 service.go:437] \"Adding new service port\" portName=\"webhook-3725/e2e-test-webhook\" servicePort=\"100.65.41.207:8443/TCP\"\nI0623 13:25:14.943339      10 service.go:437] \"Adding new service port\" portName=\"webhook-2773/e2e-test-webhook\" servicePort=\"100.64.202.157:8443/TCP\"\nI0623 13:25:14.943441      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:14.993339      10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=52\nI0623 13:25:15.002843      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.592646ms\"\nI0623 13:25:15.793367      10 service.go:322] \"Service updated ports\" service=\"webhook-3725/e2e-test-webhook\" portCount=0\nI0623 13:25:15.864332      10 service.go:462] \"Removing service port\" portName=\"webhook-3725/e2e-test-webhook\"\nI0623 13:25:15.864421      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:15.902426      10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=49\nI0623 13:25:15.908635      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.312486ms\"\nI0623 13:25:19.370205      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:19.406734  
    10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=55\nI0623 13:25:19.412179      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.03781ms\"\nI0623 13:25:20.490409      10 service.go:322] \"Service updated ports\" service=\"sctp-3778/sctp-endpoint-test\" portCount=1\nI0623 13:25:20.490466      10 service.go:437] \"Adding new service port\" portName=\"sctp-3778/sctp-endpoint-test\" servicePort=\"100.67.101.65:5060/SCTP\"\nI0623 13:25:20.490514      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:20.538933      10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=55\nI0623 13:25:20.545326      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.86512ms\"\nI0623 13:25:20.545466      10 proxier.go:853] \"Syncing iptables rules\"\nI0623 13:25:20.599304      10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=55\nI0623 13:25:20.607474      10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.098855ms\"\nI0623 13:25:25.027194      10 service.go:322] \"Service updated ports\" service=\"webhook-5315/e2e-test-webhook\" portCount=1\nI0623 13:25:25.027259      10 service.go:437] \"Adding new service port\" portName=\"webhook-5315/e2e-test-webhook\" servicePort=\"100.69.23.202:8443/TCP\"\nI0623 13:25:25.027861      10 proxier.go:853] \"Syncing iptables rules\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-central1-a-pp7m ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-master-us-central1-a-llg0 ====\n2022/06/23 13:08:39 Running command:\nCommand env: (log-file=/var/log/kube-scheduler.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-scheduler\nArgs (comma-delimited): /usr/local/bin/kube-scheduler,--authentication-kubeconfig=/var/lib/kube-scheduler/kubeconfig,--authorization-kubeconfig=/var/lib/kube-scheduler/kubeconfig,--config=/var/lib/kube-scheduler/config.yaml,--leader-elect=true,--tls-cert-file=/srv/kubernetes/kube-scheduler/server.crt,--tls-private-key-file=/srv/kubernetes/kube-scheduler/server.key,--v=2\n2022/06/23 13:08:39 Now listening for interrupts\nI0623 13:08:39.998051      10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0623 13:08:39.998108      10 flags.go:64] FLAG: --allow-metric-labels=\"[]\"\nI0623 13:08:39.998117      10 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0623 13:08:39.998122      10 flags.go:64] FLAG: --authentication-kubeconfig=\"/var/lib/kube-scheduler/kubeconfig\"\nI0623 13:08:39.998129      10 flags.go:64] FLAG: --authentication-skip-lookup=\"false\"\nI0623 13:08:39.998147      10 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0623 13:08:39.998153      10 flags.go:64] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0623 13:08:39.998159      10 flags.go:64] FLAG: --authorization-always-allow-paths=\"[/healthz,/readyz,/livez]\"\nI0623 13:08:39.998171      10 flags.go:64] FLAG: --authorization-kubeconfig=\"/var/lib/kube-scheduler/kubeconfig\"\nI0623 13:08:39.998176      10 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0623 13:08:39.998181      10 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0623 13:08:39.998185      10 
flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0623 13:08:39.998191      10 flags.go:64] FLAG: --cert-dir=\"\"\nI0623 13:08:39.998196      10 flags.go:64] FLAG: --client-ca-file=\"\"\nI0623 13:08:39.998200      10 flags.go:64] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0623 13:08:39.998205      10 flags.go:64] FLAG: --contention-profiling=\"true\"\nI0623 13:08:39.998209      10 flags.go:64] FLAG: --disabled-metrics=\"[]\"\nI0623 13:08:39.998216      10 flags.go:64] FLAG: --feature-gates=\"\"\nI0623 13:08:39.998223      10 flags.go:64] FLAG: --help=\"false\"\nI0623 13:08:39.998229      10 flags.go:64] FLAG: --http2-max-streams-per-connection=\"0\"\nI0623 13:08:39.998236      10 flags.go:64] FLAG: --kube-api-burst=\"100\"\nI0623 13:08:39.998245      10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0623 13:08:39.998251      10 flags.go:64] FLAG: --kube-api-qps=\"50\"\nI0623 13:08:39.998261      10 flags.go:64] FLAG: --kubeconfig=\"\"\nI0623 13:08:39.998266      10 flags.go:64] FLAG: --leader-elect=\"true\"\nI0623 13:08:39.998271      10 flags.go:64] FLAG: --leader-elect-lease-duration=\"15s\"\nI0623 13:08:39.998277      10 flags.go:64] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0623 13:08:39.998283      10 flags.go:64] FLAG: --leader-elect-resource-lock=\"leases\"\nI0623 13:08:39.998288      10 flags.go:64] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0623 13:08:39.998294      10 flags.go:64] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0623 13:08:39.998299      10 flags.go:64] FLAG: --leader-elect-retry-period=\"2s\"\nI0623 13:08:39.998305      10 flags.go:64] FLAG: --lock-object-name=\"kube-scheduler\"\nI0623 13:08:39.998311      10 flags.go:64] FLAG: --lock-object-namespace=\"kube-system\"\nI0623 13:08:39.998317      10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0623 13:08:39.998327      10 flags.go:64] FLAG: --log-dir=\"\"\nI0623 13:08:39.998333      10 flags.go:64] FLAG: --log-file=\"\"\nI0623 13:08:39.998338      10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0623 13:08:39.998344      10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0623 13:08:39.998350      10 flags.go:64] FLAG: --log-json-info-buffer-size=\"0\"\nI0623 13:08:39.998361      10 flags.go:64] FLAG: --log-json-split-stream=\"false\"\nI0623 13:08:39.998366      10 flags.go:64] FLAG: --logging-format=\"text\"\nI0623 13:08:39.998372      10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0623 13:08:39.998378      10 flags.go:64] FLAG: --master=\"\"\nI0623 13:08:39.998383      10 flags.go:64] FLAG: --one-output=\"false\"\nI0623 13:08:39.998388      10 flags.go:64] FLAG: --permit-address-sharing=\"false\"\nI0623 13:08:39.998394      10 flags.go:64] FLAG: --permit-port-sharing=\"false\"\nI0623 13:08:39.998399      10 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration=\"5m0s\"\nI0623 13:08:39.998405      10 flags.go:64] FLAG: --profiling=\"true\"\nI0623 13:08:39.998411      10 flags.go:64] FLAG: --requestheader-allowed-names=\"[]\"\nI0623 13:08:39.998419      10 flags.go:64] FLAG: --requestheader-client-ca-file=\"\"\nI0623 13:08:39.998432      10 flags.go:64] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0623 13:08:39.998446      10 flags.go:64] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0623 13:08:39.998460      10 flags.go:64] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0623 13:08:39.998466      10 flags.go:64] FLAG: --secure-port=\"10259\"\nI0623 13:08:39.998471      10 
flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0623 13:08:39.998476      10 flags.go:64] FLAG: --skip-headers=\"false\"\nI0623 13:08:39.998481      10 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0623 13:08:39.998486      10 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0623 13:08:39.998492      10 flags.go:64] FLAG: --tls-cert-file=\"/srv/kubernetes/kube-scheduler/server.crt\"\nI0623 13:08:39.998500      10 flags.go:64] FLAG: --tls-cipher-suites=\"[]\"\nI0623 13:08:39.998509      10 flags.go:64] FLAG: --tls-min-version=\"\"\nI0623 13:08:39.998514      10 flags.go:64] FLAG: --tls-private-key-file=\"/srv/kubernetes/kube-scheduler/server.key\"\nI0623 13:08:39.998520      10 flags.go:64] FLAG: --tls-sni-cert-key=\"[]\"\nI0623 13:08:39.998528      10 flags.go:64] FLAG: --v=\"2\"\nI0623 13:08:39.998536      10 flags.go:64] FLAG: --version=\"false\"\nI0623 13:08:39.998544      10 flags.go:64] FLAG: --vmodule=\"\"\nI0623 13:08:39.998552      10 flags.go:64] FLAG: --write-config-to=\"\"\nI0623 13:08:40.000507      10 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\"\nW0623 13:08:40.749965      10 authentication.go:346] Error looking up in-cluster authentication configuration: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.750008      10 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.\nW0623 13:08:40.750030      10 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false\nI0623 13:08:40.757714      10 configfile.go:96] \"Using component config\" config=<\n\tapiVersion: kubescheduler.config.k8s.io/v1beta2\n\tclientConnection:\n\t  acceptContentTypes: \"\"\n\t  burst: 100\n\t  contentType: application/vnd.kubernetes.protobuf\n\t  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n\t  qps: 50\n\tenableContentionProfiling: true\n\tenableProfiling: true\n\thealthzBindAddress: \"\"\n\tkind: KubeSchedulerConfiguration\n\tleaderElection:\n\t  leaderElect: true\n\t  leaseDuration: 15s\n\t  renewDeadline: 10s\n\t  resourceLock: leases\n\t  resourceName: kube-scheduler\n\t  resourceNamespace: kube-system\n\t  retryPeriod: 2s\n\tmetricsBindAddress: \"\"\n\tparallelism: 16\n\tpercentageOfNodesToScore: 0\n\tpodInitialBackoffSeconds: 1\n\tpodMaxBackoffSeconds: 10\n\tprofiles:\n\t- pluginConfig:\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: DefaultPreemptionArgs\n\t      minCandidateNodesAbsolute: 100\n\t      minCandidateNodesPercentage: 10\n\t    name: DefaultPreemption\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      hardPodAffinityWeight: 1\n\t      kind: InterPodAffinityArgs\n\t    name: InterPodAffinity\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: NodeAffinityArgs\n\t    name: NodeAffinity\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: NodeResourcesBalancedAllocationArgs\n\t      resources:\n\t      - name: cpu\n\t        weight: 1\n\t      - name: memory\n\t        weight: 1\n\t    name: NodeResourcesBalancedAllocation\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: NodeResourcesFitArgs\n\t      scoringStrategy:\n\t        
resources:\n\t        - name: cpu\n\t          weight: 1\n\t        - name: memory\n\t          weight: 1\n\t        type: LeastAllocated\n\t    name: NodeResourcesFit\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      defaultingType: System\n\t      kind: PodTopologySpreadArgs\n\t    name: PodTopologySpread\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      bindTimeoutSeconds: 600\n\t      kind: VolumeBindingArgs\n\t    name: VolumeBinding\n\t  plugins:\n\t    bind:\n\t      enabled:\n\t      - name: DefaultBinder\n\t        weight: 0\n\t    filter:\n\t      enabled:\n\t      - name: NodeUnschedulable\n\t        weight: 0\n\t      - name: NodeName\n\t        weight: 0\n\t      - name: TaintToleration\n\t        weight: 0\n\t      - name: NodeAffinity\n\t        weight: 0\n\t      - name: NodePorts\n\t        weight: 0\n\t      - name: NodeResourcesFit\n\t        weight: 0\n\t      - name: VolumeRestrictions\n\t        weight: 0\n\t      - name: EBSLimits\n\t        weight: 0\n\t      - name: GCEPDLimits\n\t        weight: 0\n\t      - name: NodeVolumeLimits\n\t        weight: 0\n\t      - name: AzureDiskLimits\n\t        weight: 0\n\t      - name: VolumeBinding\n\t        weight: 0\n\t      - name: VolumeZone\n\t        weight: 0\n\t      - name: PodTopologySpread\n\t        weight: 0\n\t      - name: InterPodAffinity\n\t        weight: 0\n\t    multiPoint: {}\n\t    permit: {}\n\t    postBind: {}\n\t    postFilter:\n\t      enabled:\n\t      - name: DefaultPreemption\n\t        weight: 0\n\t    preBind:\n\t      enabled:\n\t      - name: VolumeBinding\n\t        weight: 0\n\t    preFilter:\n\t      enabled:\n\t      - name: NodeResourcesFit\n\t        weight: 0\n\t      - name: NodePorts\n\t        weight: 0\n\t      - name: VolumeRestrictions\n\t        weight: 0\n\t      - name: PodTopologySpread\n\t        weight: 0\n\t      - name: InterPodAffinity\n\t        weight: 0\n\t      - name: VolumeBinding\n\t        weight: 0\n\t      - name: NodeAffinity\n\t        weight: 0\n\t    preScore:\n\t      enabled:\n\t      - name: InterPodAffinity\n\t        weight: 0\n\t      - name: PodTopologySpread\n\t        weight: 0\n\t      - name: TaintToleration\n\t        weight: 0\n\t      - name: NodeAffinity\n\t        weight: 0\n\t    queueSort:\n\t      enabled:\n\t      - name: PrioritySort\n\t        weight: 0\n\t    reserve:\n\t      enabled:\n\t      - name: VolumeBinding\n\t        weight: 0\n\t    score:\n\t      enabled:\n\t      - name: NodeResourcesBalancedAllocation\n\t        weight: 1\n\t      - name: ImageLocality\n\t        weight: 1\n\t      - name: InterPodAffinity\n\t        weight: 1\n\t      - name: NodeResourcesFit\n\t        weight: 1\n\t      - name: NodeAffinity\n\t        weight: 1\n\t      - name: PodTopologySpread\n\t        weight: 2\n\t      - name: TaintToleration\n\t        weight: 1\n\t  schedulerName: default-scheduler\n >\nI0623 13:08:40.757795      10 server.go:147] \"Starting Kubernetes Scheduler\" version=\"v1.25.0-alpha.1\"\nI0623 13:08:40.757811      10 server.go:149] \"Golang settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0623 13:08:40.759625      10 configmap_cafile_content.go:202] \"Starting controller\" name=\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\"\nI0623 13:08:40.759720      10 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0623 13:08:40.759747      10 
dynamic_serving_content.go:132] \"Starting controller\" name=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\"\nI0623 13:08:40.759722      10 tlsconfig.go:200] \"Loaded serving cert\" certName=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\" certDetail=\"\\\"kube-scheduler\\\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\\\"kubernetes-ca\\\" (2022-06-21 13:07:23 +0000 UTC to 2023-10-12 11:07:23 +0000 UTC (now=2022-06-23 13:08:40.759687462 +0000 UTC))\"\nI0623 13:08:40.760817      10 named_certificates.go:53] \"Loaded SNI cert\" index=0 certName=\"self-signed loopback\" certDetail=\"\\\"apiserver-loopback-client@1655989720\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1655989720\\\" (2022-06-23 12:08:40 +0000 UTC to 2023-06-23 12:08:40 +0000 UTC (now=2022-06-23 13:08:40.760782081 +0000 UTC))\"\nI0623 13:08:40.761059      10 secure_serving.go:210] Serving securely on [::]:10259\nI0623 13:08:40.761235      10 tlsconfig.go:240] \"Starting DynamicServingCertificateController\"\nW0623 13:08:40.766040      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.766193      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.766544      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.766768      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.772805      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.772959      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.773136      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.773227      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.773402      10 reflector.go:324] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.773494      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.774180      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.774302      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.774451      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.774582      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.774903      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.775193      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.775760      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.775886      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.776102      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.776220      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.776546      10 reflector.go:324] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.776654      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.783933      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.785909      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.785200      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.785964      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.785470      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.786006      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:40.786726      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:40.786859      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.598364      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.598427      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection 
refused\nW0623 13:08:41.648437      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.648613      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.725299      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.725437      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.740490      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.740646      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.765347      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.765686      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.773489      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.773644      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.779479      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.779665      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": 
dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.801383      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.801538      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.845396      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.845566      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:41.943880      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:41.944052      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:42.046232      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:42.046402      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:42.098273      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:42.098453      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:42.132120      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:42.132447      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 
13:08:42.215584      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:42.215746      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:42.326405      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:42.326565      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:43.501015      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:43.501084      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:43.938321      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:43.938482      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.012819      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.012970      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.164651      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.164914      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.218689      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get 
\"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.218835      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.282057      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.282222      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.506872      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.507036      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.545819      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.545974      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.757693      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.758050      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.780090      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.780257      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.872349      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed 
to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.872503      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.923190      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.923330      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:44.940111      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:44.940270      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:45.157280      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:45.157457      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:45.433542      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:45.433849      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:47.303854      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:47.303993      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:47.804398      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: 
Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:47.804552      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:48.230988      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:48.231259      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:48.369076      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:48.369127      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:48.505128      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:48.505173      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:48.618270      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:48.618500      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:48.911169      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:48.911225      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:49.349400      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial 
tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:49.349499      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:49.507472      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:49.507527      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:49.891512      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:49.891648      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:50.125743      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:50.126058      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:50.168063      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:50.168111      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:50.574497      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:50.574549      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:50.933036      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get 
\"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:50.933085      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:51.020576      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:51.020670      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:55.694353      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:55.694412      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:56.424983      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:56.425235      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:56.659177      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:56.659234      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:56.772573      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:56.772851      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 
13:08:58.137057      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:58.137227      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:58.667152      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:58.667297      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:58.858890      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:58.859082      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:59.273970      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:59.274115      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:59.473962      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:59.474129      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:59.617435      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:59.617580      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:08:59.784745      10 reflector.go:324] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:08:59.784845      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:09:00.226236      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:09:00.226303      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:09:00.747313      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:09:00.747361      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:09:02.672717      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:09:02.672903      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:09:03.015456      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 13:09:03.015592      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 13:09:21.305520      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0623 13:09:21.305723      10 trace.go:205] Trace[655906402]: \"Reflector ListAndWatch\" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:11.304) (total time: 10001ms):\nTrace[655906402]: ---\"Objects listed\" error:Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout 10001ms (13:09:21.305)\nTrace[655906402]: [10.001320287s] [10.001320287s] END\nE0623 13:09:21.305818      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list 
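Every failure above is the same underlying condition: the scheduler's informers list against the local apiserver endpoint (https://127.0.0.1:443) and nothing is accepting connections on that port yet. A minimal sketch, not taken from the job itself, of what "dial tcp 127.0.0.1:443: connect: connection refused" corresponds to at the socket level:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The scheduler's loopback apiserver endpoint seen in the log above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:443", 2*time.Second)
	if err != nil {
		// During the window above this prints the same "connection refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}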
*v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nW0623 13:09:21.872854      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0623 13:09:21.873050      10 trace.go:205] Trace[750544542]: \"Reflector ListAndWatch\" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:11.871) (total time: 10001ms):\nTrace[750544542]: ---\"Objects listed\" error:Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout 10000ms (13:09:21.872)\nTrace[750544542]: [10.001147467s] [10.001147467s] END\nE0623 13:09:21.873117      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nW0623 13:09:22.673735      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0623 13:09:22.673948      10 trace.go:205] Trace[640556636]: \"Reflector ListAndWatch\" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:12.672) (total time: 10001ms):\nTrace[640556636]: ---\"Objects listed\" error:Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout 10001ms (13:09:22.673)\nTrace[640556636]: [10.001836472s] [10.001836472s] END\nE0623 13:09:22.674035      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nW0623 13:09:24.152135      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0623 13:09:24.152409      10 trace.go:205] Trace[1751576531]: \"Reflector ListAndWatch\" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:14.150) (total time: 10001ms):\nTrace[1751576531]: ---\"Objects listed\" error:Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout 10001ms (13:09:24.152)\nTrace[1751576531]: [10.0018841s] [10.0018841s] END\nE0623 13:09:24.152481      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nW0623 13:09:26.381578      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0623 13:09:26.381655      10 trace.go:205] Trace[824446186]: \"Reflector ListAndWatch\" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 
13:09:16.379) (total time: 10001ms):\nTrace[824446186]: ---\"Objects listed\" error:Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout 10001ms (13:09:26.381)\nTrace[824446186]: [10.001782223s] [10.001782223s] END\nE0623 13:09:26.381675      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nW0623 13:09:26.470181      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0623 13:09:26.470258      10 trace.go:205] Trace[89338941]: \"Reflector ListAndWatch\" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:16.467) (total time: 10002ms):\nTrace[89338941]: ---\"Objects listed\" error:Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout 10002ms (13:09:26.470)\nTrace[89338941]: [10.002971163s] [10.002971163s] END\nE0623 13:09:26.470283      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nW0623 13:09:30.437354      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43202->127.0.0.1:443: read: connection reset by peer\nE0623 13:09:30.437703      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43202->127.0.0.1:443: read: connection reset by peer\nW0623 13:09:30.438056      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43220->127.0.0.1:443: read: connection reset by peer\nE0623 13:09:30.438289      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43220->127.0.0.1:443: read: connection reset by peer\nW0623 13:09:30.438331      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43190->127.0.0.1:443: read: connection reset by peer\nW0623 13:09:30.438679      10 reflector.go:324] 
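The failure mode has now shifted: the port accepts TCP connections, but the apiserver is not completing TLS handshakes, so each list is cut off at roughly 10 seconds (the ~10001ms trace totals). That cutoff is the default TLSHandshakeTimeout of Go's net/http transport, which client-go builds on. A sketch under that assumption; the probe URL is illustrative, not from the job:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Matches http.DefaultTransport and the ~10s totals traced above.
			TLSHandshakeTimeout: 10 * time.Second,
		},
	}
	resp, err := client.Get("https://127.0.0.1/readyz")
	if err != nil {
		// Prints "net/http: TLS handshake timeout" when the handshake stalls.
		fmt.Println(err)
		return
	}
	resp.Body.Close()
	fmt.Println("handshake completed, status:", resp.Status)
}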
W0623 13:09:30.437354      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43202->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.437703      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43202->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.438056      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43220->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.438289      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43220->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.438331      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43190->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.438679      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43212->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.438827      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43212->127.0.0.1:443: read: connection reset by peer
I0623 13:09:30.438852      10 trace.go:205] Trace[1287574074]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:19.472) (total time: 10966ms):
Trace[1287574074]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43190->127.0.0.1:443: read: connection reset by peer 10966ms (13:09:30.438)
Trace[1287574074]: [10.96668871s] [10.96668871s] END
E0623 13:09:30.439223      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43190->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.438523      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43192->127.0.0.1:443: read: connection reset by peer
I0623 13:09:30.439498      10 trace.go:205] Trace[716561333]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 13:09:19.698) (total time: 10741ms):
Trace[716561333]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43192->127.0.0.1:443: read: connection reset by peer 10740ms (13:09:30.438)
Trace[716561333]: [10.741405s] [10.741405s] END
E0623 13:09:30.439606      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43192->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.439493      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43224->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.439813      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43224->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.439853      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43218->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.440095      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43218->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.440123      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43206->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.440386      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43206->127.0.0.1:443: read: connection reset by peer
W0623 13:09:30.439102      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43208->127.0.0.1:443: read: connection reset by peer
E0623 13:09:30.440811      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:43208->127.0.0.1:443: read: connection reset by peer
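Each W/E pair above is one retry by a client-go Reflector: every informer lists its resource, then watches it, and on failure backs off and tries again until the apiserver comes back, at which point the caches sync (the "Caches are synced" line that follows). A minimal sketch of that machinery, under the assumption of a hypothetical kubeconfig path (the scheduler itself builds its REST config differently):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// One Reflector per informer; each emits the list/watch retries above.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until a List finally succeeds -- the moment the log records
	// as "Caches are synced for ...".
	factory.WaitForCacheSync(stop)
	fmt.Println("pod cache synced:", pods.HasSynced())
}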
I0623 13:10:13.060431      10 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0623 13:10:13.060969      10 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubernetes-ca\" [] issuer=\"<self>\" (2022-06-21 13:05:45 +0000 UTC to 2032-06-20 13:05:45 +0000 UTC (now=2022-06-23 13:10:13.060927098 +0000 UTC))"
I0623 13:10:13.061364      10 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key" certDetail="\"kube-scheduler\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\"kubernetes-ca\" (2022-06-21 13:07:23 +0000 UTC to 2023-10-12 11:07:23 +0000 UTC (now=2022-06-23 13:10:13.06131679 +0000 UTC))"
I0623 13:10:13.061685      10 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1655989720\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1655989720\" (2022-06-23 12:08:40 +0000 UTC to 2023-06-23 12:08:40 +0000 UTC (now=2022-06-23 13:10:13.061641969 +0000 UTC))"
I0623 13:10:20.579712      10 node_tree.go:65] "Added node in listed group to NodeTree" node="master-us-central1-a-llg0" zone=""
I0623 13:10:21.662732      10 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
I0623 13:10:21.670170      10 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
I0623 13:10:21.671300      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-dd657c749-ns2h8" err="0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling."
I0623 13:10:21.693450      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/kops-controller-b6qx6" node="master-us-central1-a-llg0" evaluatedNodes=1 feasibleNodes=1
I0623 13:10:21.706752      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/cloud-controller-manager-mwdgd" node="master-us-central1-a-llg0" evaluatedNodes=1 feasibleNodes=1
I0623 13:10:21.717870      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/dns-controller-78bc9bdd66-rxxpt" node="master-us-central1-a-llg0" evaluatedNodes=1 feasibleNodes=1
I0623 13:10:21.722377      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn" err="0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling."
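The coredns pods stay Pending because the only node so far is the control-plane node, which carries a node-role.kubernetes.io/control-plane:NoSchedule taint that they do not tolerate; the daemonset-style system pods above do tolerate it and bind immediately. A sketch of the check behind "node(s) had untolerated taint", using the real helper from k8s.io/api (the scheduler's TaintToleration plugin performs the equivalent match per NoSchedule taint):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// anyTolerates reports whether some toleration matches the taint.
func anyTolerates(tolerations []corev1.Toleration, taint *corev1.Taint) bool {
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(taint) {
			return true
		}
	}
	return false
}

func main() {
	taint := corev1.Taint{
		Key:    "node-role.kubernetes.io/control-plane",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// coredns carries no matching toleration, so the node is filtered out.
	var coredns []corev1.Toleration
	fmt.Println(anyTolerates(coredns, &taint)) // false

	// A pod tolerating any taint with that key (daemonset-style) would fit.
	ds := []corev1.Toleration{{
		Key:      "node-role.kubernetes.io/control-plane",
		Operator: corev1.TolerationOpExists,
	}}
	fmt.Println(anyTolerates(ds, &taint)) // true
}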
I0623 13:10:21.751716      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-node-826hl" node="master-us-central1-a-llg0" evaluatedNodes=1 feasibleNodes=1
I0623 13:10:21.761411      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-controller-9f559494d-ck9c2" node="master-us-central1-a-llg0" evaluatedNodes=1 feasibleNodes=1
I0623 13:10:38.766848      10 node_tree.go:79] "Removed node in listed group from NodeTree" node="master-us-central1-a-llg0" zone=""
I0623 13:10:38.766908      10 node_tree.go:65] "Added node in listed group to NodeTree" node="master-us-central1-a-llg0" zone="us-central1:\x00:us-central1-a"
I0623 13:10:39.815321      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/metadata-proxy-v0.12-m6h96" node="master-us-central1-a-llg0" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:08.092080      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-g3vq" zone=""
I0623 13:11:08.160268      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-node-q8rzb" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:08.231638      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-hmlq" zone=""
I0623 13:11:08.280693      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-node-ngbw7" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:08.516231      10 node_tree.go:79] "Removed node in listed group from NodeTree" node="nodes-us-central1-a-g3vq" zone=""
I0623 13:11:08.517062      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-g3vq" zone="us-central1:\x00:us-central1-a"
I0623 13:11:09.010155      10 node_tree.go:79] "Removed node in listed group from NodeTree" node="nodes-us-central1-a-hmlq" zone=""
I0623 13:11:09.010190      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-hmlq" zone="us-central1:\x00:us-central1-a"
I0623 13:11:09.363023      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/metadata-proxy-v0.12-2xk8x" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:10.325320      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-gl7l" zone=""
I0623 13:11:10.376122      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-node-lhmxf" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:10.431383      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/metadata-proxy-v0.12-2nnxv" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:10.720804      10 node_tree.go:79] "Removed node in listed group from NodeTree" node="nodes-us-central1-a-gl7l" zone=""
I0623 13:11:10.721211      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-gl7l" zone="us-central1:\x00:us-central1-a"
I0623 13:11:11.494289      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-pp7m" zone=""
I0623 13:11:11.547551      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-node-cshsz" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:11.822680      10 node_tree.go:79] "Removed node in listed group from NodeTree" node="nodes-us-central1-a-pp7m" zone=""
I0623 13:11:11.822721      10 node_tree.go:65] "Added node in listed group to NodeTree" node="nodes-us-central1-a-pp7m" zone="us-central1:\x00:us-central1-a"
I0623 13:11:12.091714      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/metadata-proxy-v0.12-wg9l8" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:13.434206      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/metadata-proxy-v0.12-gd8sn" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:11:31.126569      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/coredns-dd657c749-ns2h8" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:11:31.136039      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/coredns-autoscaler-5d4dbc7b59-gn5kn" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:11:34.771033      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/coredns-dd657c749-czzst" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=2
I0623 13:14:41.110874      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-1424/hostexec-nodes-us-central1-a-pp7m-cv89h" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:41.285429      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-1378/hostexec-nodes-us-central1-a-hmlq-gp9lt" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:41.320374      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-852/hostexec-nodes-us-central1-a-hmlq-84m9m" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:41.340715      10 schedule_one.go:263] "Successfully bound pod to node" pod="secrets-7752/pod-configmaps-6a477237-74a9-4da1-96d0-ea74302e893b" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.344998      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8853/ss-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.381269      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-2114/hostexec-nodes-us-central1-a-gl7l-kzxl5" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:41.421739      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-6571/test-grpc-059d8e1c-fdd2-4b12-9df3-8d657bce1fc3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.460836      10 schedule_one.go:263] "Successfully bound pod to node" pod="dns-3218/e2e-configmap-dns-server-c744a3d0-86e8-4806-be8e-c2ccb1a688f4" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.568819      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-8457/simpletest.deployment-b57fb94fd-k86h7" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.569233      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-8457/simpletest.deployment-b57fb94fd-8bs57" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.571122      10 schedule_one.go:263] "Successfully bound pod to node" pod="hostpath-2893/pod-host-path-test" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.632704      10 schedule_one.go:263] "Successfully bound pod to node" pod="subpath-1033/pod-subpath-test-secret-4j4p" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.713291      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-1753/downward-api-347f9438-a836-4dd8-b0d9-cfed593c3cb3" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.771395      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-5203/termination-message-container4db63e96-e5f9-4f0a-9666-97691adeb945" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.820448      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4859/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:14:41.841248      10 schedule_one.go:263] "Successfully bound pod to node" pod="dns-7355/test-dns-nameservers" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.859937      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4859/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:14:41.869174      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4859/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:14:41.898892      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4859/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:14:41.956784      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-2981/httpd" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:41.957839      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-8824/pod-projected-configmaps-148a145e-d790-4535-902e-6bec9220d1f1" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:42.391094      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4077-445/csi-mockplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:42.579323      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-9758/test-orphan-deployment-68c48f9ff9-xkt26" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:42.669107      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5103-5916/csi-hostpathplugin-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:42.871661      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-2027-7810/csi-hostpathplugin-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:42.937052      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-9623/frontend-565c96755d-v2x48" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:42.963407      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-9623/frontend-565c96755d-k4g2q" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:42.968657      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-6365-561/csi-hostpathplugin-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:14:42.970142      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-9623/frontend-565c96755d-sm7d6" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:43.055733      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-6365/hostpath-injector" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:14:43.083695      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-9623/agnhost-primary-688946dd86-qmd76" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:43.252524      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-9623/agnhost-replica-765bc5456b-b2r2b" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:43.282783      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-9623/agnhost-replica-765bc5456b-gfdd8" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:43.384390      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-2684/test-deployment-5c5999c99b-zpxhq" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:43.399702      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-2684/test-deployment-5c5999c99b-2jb8b" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:47.860418      10 schedule_one.go:263] "Successfully bound pod to node" pod="init-container-7730/pod-init-baf760ee-7979-4b9f-999c-243b3444f205" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:14:48.499572      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-6365/hostpath-injector" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:14:48.987814      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-8854/downwardapi-volume-b8572ef8-b3db-43f9-809f-0d9bbec49e21\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:49.455753      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7415/pod-subpath-test-inlinevolume-flrn\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:14:50.011343      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-hj2qz\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.024694      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-n7ptr\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.027365      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-mv9kt\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.061888      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-w46lb\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.068048      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-2nt8t\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.068178      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-rlszl\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.068418      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-xs46s\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.081886      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-zbq6b\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.111150      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-l2nxv\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.116500      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-c4hpg\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.122649      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-q4n7q\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.122705      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-8w9bl\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.123011      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-tbcl8\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.155822      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-2qlcm\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.155917      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5xkbt\" node=\"nodes-us-central1-a-pp7m\" 
evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.213876      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-cg7k6\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.213940      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-7hxfj\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.214047      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-vsvgh\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.214157      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-mprqj\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.214682      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-crkjr\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.215369      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-j6bzf\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.215741      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-mnszn\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.215817      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-mpzss\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.227864      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-rz7pm\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.241890      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-wf4dx\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.242186      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-rpn9s\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.244518      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-xn62n\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.250322      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-x7j2q\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.256957      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-bd7s4\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.293023      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5vtsh\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.350776      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-n7bt5\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.441479      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-7k9cj\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.507352      10 schedule_one.go:263] \"Successfully 
bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-xpq79\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.551813      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-6b62p\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.593419      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-d6jrn\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.643002      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-8cd82\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.698561      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-k5cbv\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.761828      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-m7vtn\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.776662      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-1344-7380/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:14:50.796632      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-qxxck\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.842660      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-74w4m\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.849601      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-6365/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:14:50.892681      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-cmx8h\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.943785      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-6mw2d\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:50.989786      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-gqdmw\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.039910      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-lp5d2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.095501      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-bg5lh\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.143792      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-8cbj4\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.197582      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-vnd9m\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.221973      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-8853/ss-1\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.249440      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-bh62j\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.312202      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-gmmcq\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.356181      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-m4jxf\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.399780      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-twwh2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.440677      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-tvw72\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.500164      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-tvt4s\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.545353      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-4l69l\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.590997      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-v8t95\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.641668      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-c9l2g\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.693347      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5x2st\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.741491      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-twlfd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.791743      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-zbx68\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.845289      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5dq95\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.898514      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5q58p\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.950880      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-qmjc4\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:51.991963      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-f29jp\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.098254      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-vb598\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.146855      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-89grl\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.190445      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-w55th\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.244961      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-qbp7f\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.293925      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-2hlls\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.342346      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-hhqfb\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.391168      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-sk8vq\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.441070      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-znclq\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.495292      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-4ftfv\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.543575      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-ghk29\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.590909      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"gc-564/simpletest-rc-to-be-deleted-w2gmz\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.640729      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-ztfmb\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.701408      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-x77cd\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.745678      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-vfcbg\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.798847      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5v6b5\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.866559      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-fgchl\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.910099      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-dgkm8\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:52.975612      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-ggkkx\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.022762      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-lblkh\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.076930      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-qph8v\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.115885      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-xmzn2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.155535      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-lf67r\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.202431      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-czssg\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.244488      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-f7gbq\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.297041      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-r4mz2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.348823      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-nwlng\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.402931      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-n9rlk\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.445612      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-x9b8g\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 
feasibleNodes=4\nI0623 13:14:53.493293      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-krp6g\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.542965      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-td529\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.592266      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-79jk6\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.641811      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-5dsxj\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.697009      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-x8m42\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.760019      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-q7wrt\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.806647      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-cgxr6\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.842206      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-qsjn9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:53.893695      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-564/simpletest-rc-to-be-deleted-bk6db\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:14:54.570238      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-852/pod-359c62db-a159-4b8b-8b5a-af470ddfbbfe\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:14:54.853139      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-6365/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:14:55.675909      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1424/pod-subpath-test-preprovisionedpv-fvmp\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:14:55.752797      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1378/pod-8efed973-b6bc-48bd-964c-83b1a445043d\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:00.608430      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-5672/test-rollover-controller-dp59d\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:02.858298      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-6365/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. 
I0623 13:15:03.482628      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-no-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:03.500354      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:04.860462      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-no-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:04.861596      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:05.544339      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pfpod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:05.688626      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4077/pvc-volume-tester-984dv" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:15:06.579835      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-4205/agnhost-primary-xgxv4" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:06.862600      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-no-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:06.869827      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:06.870372      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pfpod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:07.795658      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-5693/downwardapi-volume-5964c29e-4000-4a7f-b8d5-1b6f284967b8" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:07.808645      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-1164/annotationupdate2f30341b-6f37-487f-ad47-1e338d9d473c" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:09.478806      10 schedule_one.go:263] "Successfully bound pod to node" pod="dns-3218/e2e-dns-utils" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:09.864896      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pfpod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:10.056717      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-2114/pod-subpath-test-preprovisionedpv-6k97" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:15:10.574096      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pfpod2" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:10.865657      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-no-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:11.866428      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:11.936815      10 schedule_one.go:263] "Successfully bound pod to node" pod="containers-7831/client-containers-c042778d-12b0-4ae6-a6d2-257bd6016679" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:12.037557      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-1378/pod-d5dac86c-1ba7-43ae-9304-e940d01305fb" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:15:12.063828      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-2981/run-log-test" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:12.204790      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-2684/test-deployment-7df46cf5c9-4dqmn" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:12.868507      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-6365/hostpath-injector" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:15:13.355489      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-896/pod-projected-configmaps-3a0265d1-c57a-4c76-aa21-fc7cc0f79428" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:13.661759      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-5672/test-rollover-deployment-86bb575c96-c97nz" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:13.869261      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pfpod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:14.702550      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-1840/pfpod2" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:15:14.886507      10 schedule_one.go:263] "Successfully bound pod to node" pod="apply-5600/deployment-6c468f5898-xr7g8" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:14.908444      10 schedule_one.go:263] "Successfully bound pod to node" pod="apply-5600/deployment-6c468f5898-q8bqc" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
E0623 13:15:14.920497      10 framework.go:1046] "Failed running Bind plugin" err="pods \"deployment-6c468f5898-gnfjp\" is forbidden: unable to create new content in namespace apply-5600 because it is being terminated" plugin="DefaultBinder" pod="apply-5600/deployment-6c468f5898-gnfjp"
I0623 13:15:14.920883      10 schedule_one.go:794] "Failed to bind pod" pod="apply-5600/deployment-6c468f5898-gnfjp"
E0623 13:15:14.921208      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"deployment-6c468f5898-gnfjp\" is forbidden: unable to create new content in namespace apply-5600 because it is being terminated" pod="apply-5600/deployment-6c468f5898-gnfjp"
E0623 13:15:14.929592      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-6c468f5898-q8bqc.16fb423c56ac4def", GenerateName:"", Namespace:"apply-5600", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"apply-5600", Name:"deployment-6c468f5898-q8bqc", UID:"c0014016-555f-4131-9008-89135baebfe2", APIVersion:"v1", ResourceVersion:"3205", FieldPath:""}, Reason:"Scheduled", Message:"Successfully assigned apply-5600/deployment-6c468f5898-q8bqc to nodes-us-central1-a-hmlq", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 15, 14, 908392943, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 15, 14, 908392943, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "deployment-6c468f5898-q8bqc.16fb423c56ac4def" is forbidden: unable to create new content in namespace apply-5600 because it is being terminated' (will not retry!)
E0623 13:15:14.934667      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-6c468f5898-gnfjp.16fb423c577458c1", GenerateName:"", Namespace:"apply-5600", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"apply-5600", Name:"deployment-6c468f5898-gnfjp", UID:"400bffc6-6a93-4abc-a135-2b453f032880", APIVersion:"v1", ResourceVersion:"3207", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"deployment-6c468f5898-gnfjp\" is forbidden: unable to create new content in namespace apply-5600 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 15, 14, 921502913, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 15, 14, 921502913, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "deployment-6c468f5898-gnfjp.16fb423c577458c1" is forbidden: unable to create new content in namespace apply-5600 because it is being terminated' (will not retry!)
I0623 13:15:15.177491      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-3720/pod-service-account-b007b009-5a82-4334-8bd1-1cfe085f2b21" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:15.497784      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-test-1138/implicit-nonroot-uid" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:15.698060      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-test-3859/explicit-nonroot-uid" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:15.774638      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-5672/test-rollover-deployment-58d566676d-7qnnp" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:16.507706      10 schedule_one.go:263] "Successfully bound pod to node" pod="replicaset-7916/test-rs-c45xr" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:17.481462      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8048/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:15:17.507042      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8048/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:15:17.513892      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8048/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:15:17.521995      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8048/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:15:18.423447      10 schedule_one.go:263] "Successfully bound pod to node" pod="tables-2549/pod-1" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:18.587894      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-2033/hostexec-nodes-us-central1-a-hmlq-xpbxp" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:15:20.209421      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-2114/pod-subpath-test-preprovisionedpv-6k97" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:15:20.329994      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-4676-3727/csi-hostpathplugin-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-4676-3727/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:20.866439      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-1344/pod-6df05c14-78b8-483a-958d-afa5ed080433\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:21.621104      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3783/hostexec-nodes-us-central1-a-hmlq-fnp22\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:22.002360      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-4859/test-container-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:22.002613      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6400/hostexec-nodes-us-central1-a-g3vq-766jd\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:22.009592      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-4859/host-test-container-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:22.332647      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1838/hostexec-nodes-us-central1-a-hmlq-ff6nd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:22.878373      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-6365/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:15:23.597796      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-7916/test-rs-r2ngq\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:23.864820      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8376/netserver-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:23.893406      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8376/netserver-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:23.894778      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8376/netserver-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:23.924766      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8376/netserver-3\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:25.629144      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-4229/ss-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:26.060987      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7525/hostexec-nodes-us-central1-a-pp7m-fm95n\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:26.504171      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-9866/e2e-test-httpd-pod\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:27.041435      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:27.246910      
10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-2684/test-deployment-7df46cf5c9-hr2s2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:27.269792      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-2684/test-deployment-577d99f66-c7xbv\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:27.659676      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-5749/dns-test-8c0e7a5c-7139-45e6-931d-b09105c31f85\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:28.056356      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-1317/dns-test-cc9be1c9-1a92-42b3-a2f0-f775ea660265\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:28.540166      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"crd-webhook-7881/sample-crd-conversion-webhook-deployment-646fc49456-4nqfj\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:28.799034      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5103/pod-subpath-test-dynamicpv-z5xp\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:29.017051      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2027/pod-subpath-test-dynamicpv-9dq9\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:32.506914      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-9423/downward-api-1e2a340d-fbc6-4f25-a5b9-75dc62dac443\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:32.708225      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replication-controller-5011/my-hostname-basic-4548d4a2-0ff6-42c6-a77e-296d1e2e8223-9cbxz\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:32.845898      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-2684/test-deployment-577d99f66-74w67\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:32.916666      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6365/hostpath-injector\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:33.851992      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3783/pod-cc39cc6a-3adc-443a-96da-f5313f671e18\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:34.238003      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7525/pod-a1d64b74-1825-479c-98dc-43710823cbe6\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:35.177959      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-8949/pod-adoption-release\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:36.128940      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-9419/security-context-eef1f0c5-3270-4451-b7cf-83c3fc7591cd\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:36.801000      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-8300/test-webserver-d75e0700-02e1-4a24-b479-04f4f3bb1271\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:36.923812      
10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"aggregator-9310/sample-apiserver-deployment-84c5d6865b-xgzqz\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:38.117275      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-1317/dns-test-4e0a629e-36d0-40bf-a266-5eec55a1aa13\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:38.149694      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2354/pod-subpath-test-inlinevolume-qtx8\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:38.986871      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2033/pod-subpath-test-preprovisionedpv-knlw\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:39.575449      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8048/test-container-pod\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:40.412155      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6400/pod-5d6d7afb-9dc1-41cc-9af9-bb171b9c4959\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:40.754073      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1838/pod-subpath-test-preprovisionedpv-rb6d\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:40.834476      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2876/indexed-job-0-jkjwq\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:40.855856      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2876/indexed-job-1-sj428\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:42.401709      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-2629/test-rolling-update-controller-ld57r\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:44.840899      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-4752/downwardapi-volume-2570b4bc-7646-40ca-b611-1a9fc0b03015\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:45.028387      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-7077/pod-2ee34857-88f0-4f18-9a12-52fec54b0b39\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:46.244255      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-8949/pod-adoption-release-j5zbp\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:46.433297      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7525/pod-d46cef15-fdbe-43be-b594-59a774d51054\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:47.147736      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:47.748056      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8425-4075/csi-mockplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:48.538989      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2876/indexed-job-2-wb2w9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 
feasibleNodes=4\nI0623 13:15:48.571117      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2876/indexed-job-3-v24q9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:48.941192      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-1344/pod-070a0cd1-9297-476b-bf37-12b5b5424e6c\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:50.602284      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6400/pod-96ee6f5b-9475-4f95-93b1-0bb4aecd2a5f\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:15:51.326063      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4453/hostexec-nodes-us-central1-a-pp7m-wwmct\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:51.449956      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-2629/test-rolling-update-deployment-8684b45d9-n9lrw\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:51.993317      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8376/test-container-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:52.004370      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8376/host-test-container-pod\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:53.207522      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"subpath-8952/pod-subpath-test-downwardapi-hwqf\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:53.343201      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-603/agnhost-primary-vrjp2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:53.686499      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2557-3890/csi-mockplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:53.748943      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2557-3890/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:15:54.940068      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-3109/pod-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:54.940375      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-3109/pod-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:54.975430      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-3109/pod-2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:57.449712      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4139/slow-terminating-unready-pod-6jkdg\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:57.788801      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"secrets-3134/pod-secrets-310647a0-87ec-407a-86b8-914f0a19416f\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:15:58.710747      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-2195/sample-webhook-deployment-5f8b6c9658-grgxc\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 
I0623 13:15:59.091288      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-6689/hostexec-nodes-us-central1-a-pp7m-km6xw" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:15:59.632178      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-4453/pod-8fbe2064-f8b3-4202-be03-51cfbc931ccc" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:15:59.811873      10 schedule_one.go:263] "Successfully bound pod to node" pod="prestop-8371/pod-prestop-hook-d8889006-e49e-49cd-b8cf-e0c46c663569" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:15:59.833573      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-8425/pvc-volume-tester-4cf46" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:00.927694      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:02.235207      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4734/nodeport-test-9bjwg" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:02.243540      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4734/nodeport-test-zhrf4" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:02.350322      10 schedule_one.go:263] "Successfully bound pod to node" pod="dns-1317/dns-test-1cc644f0-bce0-4200-8d0d-d05d9d4a9a85" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:02.379902      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/up-down-1-lz89k" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:02.426310      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/up-down-1-vjgnb" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:02.426711      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/up-down-1-ffs9d" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:03.599401      10 schedule_one.go:263] "Successfully bound pod to node" pod="replication-controller-5070/my-hostname-private-9e3a2966-c9c1-4797-b588-fb9a653d8924-xr9mf" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:03.825122      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-3749/busybox-09f4897a-a8f8-4773-b8af-8ba26205dd85" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:03.880797      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-6365/hostpath-client" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:16:04.139431      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-5273/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:04.157159      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-5273/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:04.194130      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-5273/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:04.205631      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-5273/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:04.614005      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-4229/ss-1" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:04.929574      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-6365/hostpath-client" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:05.345426      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-3172/pod-configmaps-1b057df8-2747-46a6-9357-db0cb8ab042d" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:05.532941      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4139/execpod-8xdcf" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:07.085247      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-4633/pod-projected-configmaps-718da02e-11c8-4854-b02f-c6ccd7d712f9" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:09.157896      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-1" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:09.271854      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-6689/pod-79df485d-2153-4b69-8c24-dfe13639a126" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:10.771212      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-9605/probe-test-938ad93d-7790-45d4-aebf-025d0264ec6d" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:11.254208      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4734/execpodmnnp5" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:11.459274      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/up-down-2-w8n26" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:11.473239      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/up-down-2-8t2kd" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:11.481914      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/up-down-2-c4cfn" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:12.063724      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-2557/pvc-volume-tester-nzn85" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:12.445939      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-664/liveness-ed560815-c47b-4459-9497-5b745f661117" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:12.521823      10 schedule_one.go:263] "Successfully bound pod to node" pod="secrets-7483/pod-secrets-746f85c8-f487-469c-9e97-b70a05b646dc" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:13.240098      10 schedule_one.go:263] "Successfully bound pod to node" pod="secrets-6820/pod-secrets-f806eac9-7ab8-48a2-9a27-7118a37f8c93" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:14.000667      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-5384/ss-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:15.121537      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-2" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:15.456490      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-9518/termination-message-container4ce8ee19-3e02-485e-8333-c24a43177e45" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:18.826980      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-3742/pod-subpath-test-inlinevolume-kxjv" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:19.398972      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-1398/pod-size-memory-volume-ec9c2a34-c1bc-4ce6-9327-e182caeefb7b" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:19.478524      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-6689/pod-5758abe7-6c60-4ed0-a88b-8c99da4b3958" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:19.590066      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1870/hostexec-nodes-us-central1-a-g3vq-w2vmk" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:20.106518      10 schedule_one.go:263] "Successfully bound pod to node" pod="conntrack-3637/pod-client" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:21.002437      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-9714/hostexec-nodes-us-central1-a-g3vq-8vqg2" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:21.065991      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-8244/httpd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:21.168825      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3112-7835/csi-mockplugin-attacher-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:21.195947      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3112-7835/csi-mockplugin-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:21.399588      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-2438/test-deployment-fxlqf-6465649447-k2qkd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:22.321457      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-2805/pod-600a4e7b-647b-4914-a5f4-f7083167521e" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:22.857073      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-1068-5367/csi-mockplugin-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:22.901451      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-1068-5367/csi-mockplugin-resizer-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:23.518604      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-up-host-exec-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:24.136570      10 schedule_one.go:263] "Successfully bound pod to node" pod="conntrack-3637/pod-server-1" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:25.810140      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-945/inline-volume-6pc4s" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-6pc4s-my-volume\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:16:26.487368      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-8928/hostexec-nodes-us-central1-a-gl7l-xgnlr" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:27.710417      10 schedule_one.go:263] "Successfully bound pod to node" pod="hostpath-1481/pod-host-path-test" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:28.115189      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-2557/inline-volume-74hnl" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:28.306185      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-5273/test-container-pod" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:28.380920      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-945/inline-volume-tester-b4rjs" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-tester-b4rjs-my-volume-0\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:16:28.450827      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-945-9972/csi-hostpathplugin-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:29.527573      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-up-exec-pod-fkt6g" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:29.751512      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1870/hostport" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:29.941409      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-945/inline-volume-tester-b4rjs" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:16:30.059148      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5935/hostexec-nodes-us-central1-a-gl7l-fvzx5" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:31.942237      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-945/inline-volume-tester-b4rjs" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:16:32.772504      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3112/pvc-volume-tester-nv4ns" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:33.224519      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-test-3165/busybox-user-0-dfa19db9-f82c-4596-9395-c3749e59a6c7" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:33.770471      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1870/hostexec-nodes-us-central1-a-g3vq-fb9z7" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:34.428850      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-1068/pvc-volume-tester-vcwsq" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:34.724675      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-up-host-exec-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:34.792486      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-3383/agnhost-primary-9ddxt" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:34.793269      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-378/image-pull-testce0fbf64-da23-4d3a-a622-b108a5a57a84" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:34.844674      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-7755/test-recreate-deployment-7569c79777-5qsdl" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:35.172707      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-8244/failure-1" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:35.955928      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-945/inline-volume-tester-b4rjs" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:36.550818      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-4229/ss-2" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:37.888566      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1870/hostexec-nodes-us-central1-a-g3vq-d5sw8" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:38.285448      10 schedule_one.go:263] "Successfully bound pod to node" pod="conntrack-3637/pod-server-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:38.686991      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5522/server-envvars-d027234e-971f-4184-b952-c3fe12dbcd08" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:39.267661      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-8928/pod-subpath-test-preprovisionedpv-kgg7" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:39.565459      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-9714/pod-23743212-622f-4f60-9fb7-f5565b36d7bf" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:40.191165      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-5666/downwardapi-volume-98ae5106-b165-4f14-b75c-016c41d39b4c" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:40.377455      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5935/pod-subpath-test-preprovisionedpv-hvv9" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:41.945816      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7446/hostexec-nodes-us-central1-a-pp7m-5ktjt" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:42.755227      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-up-exec-pod-6gwbb" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:42.953258      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-7755/test-recreate-deployment-6ff6c9b95f-mfvzt" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:43.190064      10 schedule_one.go:263] "Successfully bound pod to node" pod="e2e-kubelet-etc-hosts-7933/test-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:44.372752      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5069/hostexec-nodes-us-central1-a-g3vq-4qljt" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:45.324547      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-2" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:45.953958      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-6224/pod-configmaps-544b7e92-96e5-401d-aac7-166c8ab92bd5" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:46.054436      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-9714/pod-fb322876-95b5-474a-9f92-5913cd4d9a23" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:46.329036      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-1459/hostexec-nodes-us-central1-a-hmlq-ws7d8" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:46.509174      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-1068/pvc-volume-tester-zf6kl" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:48.740819      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5522/client-envvars-a3eb8c34-77c9-4063-bb0f-f149e7cca6cf" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:49.560941      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-9060/pod-subpath-test-inlinevolume-xkqt" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:50.449444      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5935/pod-subpath-test-preprovisionedpv-hvv9" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:51.379766      10 schedule_one.go:263] "Successfully bound pod to node" pod="e2e-kubelet-etc-hosts-7933/test-host-network-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:51.433109      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:51.468107      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-3766/httpd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:52.019020      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-down-host-exec-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:54.145692      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7446/pod-subpath-test-preprovisionedpv-wnrl" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:54.236735      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7349/hostexec-nodes-us-central1-a-pp7m-q2xp4" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:54.630585      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-8766/hostexec-nodes-us-central1-a-g3vq-7m7kt" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:54.907301      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-1459/pod-subpath-test-preprovisionedpv-7v8g" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:55.223562      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumelimits-9866-4503/csi-hostpathplugin-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:55.263517      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5069/pod-subpath-test-preprovisionedpv-9fhq" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:16:56.058443      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-9813/hairpin" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:56.796902      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8541/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:56.829227      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8541/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:56.838484      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8541/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:56.857705      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8541/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:16:58.761830      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-8222/pod-projected-secrets-cc398057-7837-40ab-9f66-9700b4257991" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:16:59.228701      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-6215/hostexec-nodes-us-central1-a-pp7m-9gsc7" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:00.150681      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-2970/simple-27599837-2bwm5" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:00.364175      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-2" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:00.677919      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-5384/ss-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:01.078373      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-0-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:01.078635      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-2-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:01.079281      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-1-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:02.324524      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-9474/pod-subpath-test-inlinevolume-mlq8" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:04.266076      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-up-host-exec-pod" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:05.666901      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-1-1" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:06.079120      10 schedule_one.go:263] "Successfully bound pod to node" pod="runtimeclass-8008/test-runtimeclass-runtimeclass-8008-preconfigured-handler-njpx4" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:06.860185      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-4249/hostexec-nodes-us-central1-a-hmlq-ssl46" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:07.600167      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-3766/run-test" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:07.794849      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-463/pod-44e6d537-ca28-4f7a-9799-a0db7330f676" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:08.241674      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5173/pod-subpath-test-inlinevolume-288m" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:08.871078      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-8766/pod-subpath-test-preprovisionedpv-pclk" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:08.978598      10 schedule_one.go:263] "Successfully bound pod to node" pod="conntrack-1169/pod-client" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:09.060697      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-3512/sample-webhook-deployment-5f8b6c9658-vpqwr" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:09.559263      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-6215/exec-volume-test-preprovisionedpv-ltp8" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:10.104879      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-2-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:10.206415      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-159/busybox-ed892f6f-ce64-4a6e-acf1-8f8b2a273676" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:10.447180      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7349/pod-subpath-test-preprovisionedpv-865f" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:11.014659      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-test-2032/busybox-privileged-false-43573846-bded-42ad-b2b3-920cbac73d1b" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:11.862219      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3254-5349/csi-mockplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:11.872692      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3254-5349/csi-mockplugin-attacher-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:12.269179      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-0-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:12.538286      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-1-2" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:13.680119      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-8627/ss2-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:13.938533      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-4422/httpd" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:14.219386      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-2226/externalsvc-4fpq4" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:14.257340      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-2226/externalsvc-htx8k" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:15.385188      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-1737-4675/csi-mockplugin-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:17:15.591783      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-5490/pod-terminate-status-2-2" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:15.921945      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-4229/ss-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:17:16.299905      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-8190/verify-service-up-exec-pod-7f4k5" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 
13:17:16.356599      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-1760/security-context-0d61ee79-4775-46fc-941f-41c8ac2f4f50\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:17.273199      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-9581/httpd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:17.597124      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:17.949886      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-361/hostexec-nodes-us-central1-a-gl7l-5bw47\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:18.508485      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-3\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:19.003429      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-3\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:21.042411      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"conntrack-1169/pod-server-1\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:21.380579      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8973-8636/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:21.396448      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8973-8636/csi-mockplugin-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:21.516660      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/up-down-3-pw2z8\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:21.536897      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/up-down-3-qzjnb\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:21.537462      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/up-down-3-v55vf\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:22.461271      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-2834/hostexec-nodes-us-central1-a-gl7l-8g2xh\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:22.886580      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3254/pvc-volume-tester-7wnbz\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:22.947230      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8541/test-container-pod\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:22.964294      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-8541/host-test-container-pod\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:23.113437      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-4\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:23.278083      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"services-2226/execpod5s54q\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:24.409350      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:25.592173      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-4249/exec-volume-test-preprovisionedpv-z6jp\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:25.636254      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1049/externalip-test-jgk8c\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:25.654515      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1049/externalip-test-f2n68\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:26.126948      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-361/pod-171c8ca8-b620-4cd4-9622-89ab1c8fbdc9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:17:26.195071      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-3\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:28.935135      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-4\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:29.100165      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"configmap-709/pod-configmaps-d0d1734a-d0ca-42b0-bfb0-393a7035f6d5\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:29.325653      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-4\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:29.376752      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-9581/success\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:29.949713      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-8627/ss2-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:31.340552      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-5\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:32.685240      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-5\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:32.727700      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-2073/security-context-821f4368-56ca-486d-821b-3e48fcba9ff4\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:33.520713      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-4229/ss-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:33.542321      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:33.732513      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1737/pvc-volume-tester-b858q\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 
feasibleNodes=1\nI0623 13:17:34.124289      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-5\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:34.264629      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-3766/run-test-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:34.691201      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1049/execpodg274k\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:34.917469      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-test-3440/busybox-readonly-true-de0c157b-9cbb-43c0-829a-23c858b579f3\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:35.021467      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-test-2596/busybox-readonly-fs7292df4a-829f-4782-8810-2eac2a6fec7b\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:35.546462      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5156-9512/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:35.594594      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5156/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:17:35.934653      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-6\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:36.892651      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8973/pvc-volume-tester-g8j2g\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:36.990428      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5156/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:17:37.100670      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3286/hostexec-nodes-us-central1-a-g3vq-dkrdr\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:37.198582      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-6\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:39.336084      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3337-99/csi-mockplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:39.368741      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3337-99/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:39.577082      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/verify-service-up-exec-pod-lnwh6\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:39.992519      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5156/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:17:40.325935      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6182/httpd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:40.342891      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-7\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:40.694988      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-2834/pod-340a3180-2f30-4c09-b0de-5c65bc9ab813\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:17:41.120364      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-9060/hostexec-nodes-us-central1-a-g3vq-wz9tw\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:41.606651      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-6\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:42.468984      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-7\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:43.513770      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7107/hostexec-nodes-us-central1-a-hmlq-fzszd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:44.002576      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5156/hostpath-injector\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:45.798270      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-8\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:46.330637      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3337/pvc-volume-tester-8gc6p\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:46.588594      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:46.672861      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-7\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:49.041935      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-6908/test-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:49.627327      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:49.688148      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-2344/inline-volume-tester-w7dzk\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:49.755730      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-2344-916/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:49.787801      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-3766/run-test-3\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:51.707913      10 schedule_one.go:263] 
\"Successfully bound pod to node\" pod=\"container-probe-9551/test-grpc-962c2c0c-ac07-49ce-a6c9-210d2e93c2ba\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:51.788833      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-8\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:51.997981      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2805/suspend-true-to-false-dszl8\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:52.015376      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2805/suspend-true-to-false-8t8dd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:52.187218      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8510/hostexec-nodes-us-central1-a-g3vq-x8ldl\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:52.724288      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-2834/hostexec-nodes-us-central1-a-gl7l-5p76s\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:53.404827      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-8\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:54.527959      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-4501/liveness-bd233eb1-529f-48c5-aec2-4d1e977b9b37\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:54.603308      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8190/verify-service-up-exec-pod-8d2b4\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:55.138809      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6308/pod-subpath-test-inlinevolume-c4zb\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:55.195887      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:17:55.289309      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3286/pod-subpath-test-preprovisionedpv-j7kc\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:55.480146      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-9060/local-injector\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:55.703498      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7107/pod-dadd7d99-7895-4a33-8e91-3ae0f1bdf402\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:17:56.968020      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8510/pod-773b7626-056a-4f45-a015-4be53cd899fb\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:17:57.014737      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6569/hostexec-nodes-us-central1-a-pp7m-gwglr\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:17:57.804773      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-4229/ss-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 
feasibleNodes=4\nI0623 13:17:59.728842      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-10\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:00.133326      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"cronjob-5830/forbid-27599838-8fhnk\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:02.098471      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2805/suspend-true-to-false-h4lt5\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:02.115574      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2805/suspend-true-to-false-h67vw\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:02.120433      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:02.404228      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3880/hostexec-nodes-us-central1-a-hmlq-6k895\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:02.791163      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-11\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:03.166714      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8510/pod-105d00fb-3aeb-4150-a282-8e5ba6e3f965\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:03.362354      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3286/pod-subpath-test-preprovisionedpv-j7kc\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:04.383491      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-93/downward-api-2a9a4256-356c-4560-ab73-df2dc3afdd00\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:05.388993      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7742/hostexec-nodes-us-central1-a-hmlq-lr8m9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:07.072574      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:08.194076      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-12\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:08.281867      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-5156/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:08.720173      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-9060/local-client\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:09.200373      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6569/pod-subpath-test-preprovisionedpv-4r6c\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:09.893671      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7107/pod-97064ca5-2a43-485f-b772-ae6c5b7b9480\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:10.023619      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5156/hostpath-client\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:10.424977      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-10\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:10.654731      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-7579/pod1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:10.688789      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-7579/pod2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:10.697576      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-7579/pod3\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:11.978613      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-7924/pod-update-919f96cf-6655-4190-b614-1d672a0f2612\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:13.634718      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"containers-9045/client-containers-0aeaab8f-9929-4188-96b6-4e16e5870318\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:13.669316      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-10\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:13.990261      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-13\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:15.858702      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8036/hostexec-nodes-us-central1-a-pp7m-mrtpc\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:16.686621      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pvc-protection-6843/pvc-tester-h6bxv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:17.586798      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7742/pod-1a471b4d-b359-41a2-9ff3-2bd0c69c9add\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:17.676771      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-5677/pod-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:18.257045      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-6531/pod-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 
13:18:18.283987      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-6531/pod-1\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:18.294837      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-6531/pod-2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:18.803034      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-11\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:20.392784      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-8629/test-new-deployment-68c48f9ff9-mrfhg\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:21.423178      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-12\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:22.485854      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pvc-protection-3766/pvc-tester-n67h9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:22.614644      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-11\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:24.015380      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8036/pod-2aeb725c-62ec-4c80-a317-f06816a5dfcb\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:24.392814      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-2-14\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:24.572487      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3880/pod-subpath-test-preprovisionedpv-2zfq\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:26.702495      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-runtime-8302/image-pull-test282c76b6-405c-49a5-b706-82eac7cc6d3a\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:26.805142      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-705-8848/csi-mockplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:26.846436      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-7720/ss-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:27.337558      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2043/netserver-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:27.345879      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2043/netserver-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:27.363923      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2043/netserver-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:27.374631      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2043/netserver-3\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:27.805720      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"persistent-local-volumes-test-7742/pod-1b524a03-c8aa-4192-a3a7-e04bb4875883\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:31.966226      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2804/pod-subpath-test-inlinevolume-6gnj\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:32.640699      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-864/sample-webhook-deployment-5f8b6c9658-b5m92\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:32.790133      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"mount-propagation-5976/hostexec-nodes-us-central1-a-g3vq-rcddt\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:32.890924      10 volume_binding.go:338] \"Failed to bind volumes for pod\" pod=\"csi-mock-volumes-705/pvc-volume-tester-j5q8n\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-k7hsg\\\"\"\nE0623 13:18:32.891347      10 framework.go:1013] \"Failed running PreBind plugin\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-k7hsg\\\"\" plugin=\"VolumeBinding\" pod=\"csi-mock-volumes-705/pvc-volume-tester-j5q8n\"\nE0623 13:18:32.891693      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"running PreBind plugin \\\"VolumeBinding\\\": binding volumes: provisioning failed for PVC \\\"pvc-k7hsg\\\"\" pod=\"csi-mock-volumes-705/pvc-volume-tester-j5q8n\"\nI0623 13:18:34.042088      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-13\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:35.059724      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-705/pvc-volume-tester-j5q8n\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:36.004113      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-12\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:36.315890      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3041/hostexec-nodes-us-central1-a-g3vq-jq8nf\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:36.907203      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"subpath-9419/pod-subpath-test-projected-qf9d\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.466407      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-1784/metadata-volume-dfcc1502-c6dd-4aee-a4db-e60332864bac\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.656825      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"secrets-8931/pod-secrets-2e31c777-bd53-4fe1-9d37-8537c47fd78b\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.914144      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-68c48f9ff9-zrzj7\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.914196      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-68c48f9ff9-fsb2f\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.921717      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"deployment-4586/webserver-68c48f9ff9-xb44n\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.955322      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-68c48f9ff9-wb66m\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.962250      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-68c48f9ff9-8dvh4\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:38.962629      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-68c48f9ff9-j7wsw\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:40.188406      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-13\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:41.019892      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-8b9z6\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:41.083888      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-hhwkv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:41.832033      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-2877/dns-test-8b5b1d62-a81e-4759-be28-bc4f4478dc35\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.338347      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-820/inline-volume-lzl2b\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-lzl2b-my-volume\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:42.524857      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-hg7gq\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.565198      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-47bfw\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.565874      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-7pgzv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.616910      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-mvh2w\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.617481      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-mk5wv\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.618169      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-wtt74\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.617797      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-dhhxk\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.668377      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-qvlsb\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.748535      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-9lhwl\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.749654      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-w2hf2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:42.868977      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8251/hostexec-nodes-us-central1-a-pp7m-hphpk\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:42.935251      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"containers-9237/client-containers-77f88a9b-1ee5-483d-9770-173957da4173\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:43.395125      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-1-14\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:44.469370      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9352/test-pod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:44.809471      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3041/pod-e86181cc-3d70-4a41-bc22-da697ead9226\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:44.896870      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-820/inline-volume-tester-tphh8\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-tphh8-my-volume-0\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:44.937371      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-820-6139/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:44.982127      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"containers-7573/client-containers-244c44fd-ded5-4132-9030-75d6c08a1007\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:45.115579      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-7101/hostexec-nodes-us-central1-a-pp7m-b2mv4\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:45.627405      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-rkzzf\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:46.042102      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-820/inline-volume-tester-tphh8\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:46.681084      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5490/pod-terminate-status-0-14\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.415561      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-6178/liveness-1b313088-f8b1-4e4f-9709-0dbdef9db117\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.646702      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-5697/pod-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.659839      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-5697/pod-1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.668848      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-5697/pod-2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.807258      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-csf4s\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.979438      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-2762/externalname-service-5j2wh\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:48.983359      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-2762/externalname-service-l9xll\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:49.175429      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3041/pod-cdfcc05f-3990-4eb8-be37-390a91aa8ef2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:49.657108      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-820/inline-volume-tester-tphh8\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:50.521441      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9352/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:50.675750      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-hlv2w\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:50.703638      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-68c48f9ff9-fkjl5\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:52.322198      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-3655/sample-webhook-deployment-5f8b6c9658-kbx66\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:52.841196      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-lfx7n\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:52.913141      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-ldcnq\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:53.192721      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-8934/test-rs-4glvv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:53.632440      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-7505/sample-webhook-deployment-5f8b6c9658-9wz7t\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:54.062352      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-820/inline-volume-tester-tphh8\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:54.571984      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7188/rs-94pcp\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nE0623 13:18:54.584644      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-94pcp.16fb426f7ba44a6f\", GenerateName:\"\", Namespace:\"disruption-7188\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-7188\", Name:\"rs-94pcp\", UID:\"11922a14-b436-427d-bf71-da4a28eb4a99\", APIVersion:\"v1\", ResourceVersion:\"12975\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned disruption-7188/rs-94pcp to nodes-us-central1-a-g3vq\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 18, 54, 571956847, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 18, 54, 571956847, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-94pcp.16fb426f7ba44a6f\" is forbidden: unable to create new content in namespace disruption-7188 because it is being terminated' (will not retry!)\nI0623 13:18:54.673996      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"var-expansion-6107/var-expansion-43c347d6-d91d-4908-8eac-a91abba00248\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:55.139723      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8251/exec-volume-test-preprovisionedpv-8msf\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:18:55.273729      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-7101/pod-688dab8d-45c4-49f8-b8d2-44c196624744\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:18:55.427183      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2043/test-container-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:57.637023      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4371/inline-volume-7r22l\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-7r22l-my-volume\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:18:58.204179      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-8934/test-rs-g7x5t\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:58.245125      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-8934/test-rs-dzcwl\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:18:59.318004      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"var-expansion-6930/var-expansion-1db11587-a7d7-4682-97fc-76be7cc14c1c\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:00.104223      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-714/hostexec-nodes-us-central1-a-hmlq-wx22g\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:00.363211      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4371/inline-volume-tester-5j8rr\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-5j8rr-my-volume-0\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:19:00.419507      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-4371-7735/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:01.007172      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-2762/execpodmrmxd\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:01.296178      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-7101/hostexec-nodes-us-central1-a-pp7m-qkzsv\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:01.463909      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3537/hostexec-nodes-us-central1-a-pp7m-s78lb\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:01.581870      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"runtimeclass-560/test-runtimeclass-runtimeclass-560-preconfigured-handler-w4tzs\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:01.949086      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2396-9709/csi-mockplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:01.998567      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2396-9709/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:02.227844      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-6813/metadata-volume-9e5d2e60-2d86-4d69-9e40-d59a7578f307\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:03.090972      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2625-434/csi-mockplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:03.116362      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2625-434/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:03.464619      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-7720/ss-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:03.729706      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-463/hostexec-nodes-us-central1-a-pp7m-vj99n\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:04.890344      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:05.072101      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"port-forwarding-1086/pfpod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:05.714040      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-3840/pod-0ad668fe-7d18-48f9-b873-b6628dc62c37\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:06.317462      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-6083/netserver-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:06.333745      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-6083/netserver-1\" 
node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:06.358595      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-6083/netserver-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:06.404504      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-6083/netserver-3\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:07.087333      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-4371/inline-volume-tester-5j8rr\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:07.574638      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-54484b94f8-b6p58\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:10.343225      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"configmap-850/pod-configmaps-7bcb60cf-a667-4520-b1ea-c6f0b13fc0ee\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:10.409324      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-714/pod-subpath-test-preprovisionedpv-wkj6\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:10.812259      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:11.813665      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3332/hostexec-nodes-us-central1-a-gl7l-ghvw9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:13.498092      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2396/pvc-volume-tester-7vpg2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:14.217591      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5329/hostexec-nodes-us-central1-a-g3vq-mrtdx\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:14.621098      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2625/pvc-volume-tester-tb5hn\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:15.893395      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"endpointslice-3451/pod1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:15.914228      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"endpointslice-3451/pod2\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:17.897197      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-2150/test-ss-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:19.696305      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:20.442436      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-6083/test-container-pod\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:20.513477      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-9629/update-demo-nautilus-xp9c9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:20.516659      10 schedule_one.go:263] 
\"Successfully bound pod to node\" pod=\"kubectl-9629/update-demo-nautilus-9qdsx\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:20.588715      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-9322/startup-74749532-740e-4ad5-920f-330bccbf557d\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:23.160672      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-7720/ss-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:23.281440      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"prestop-2695/server\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:24.094831      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3332/pod-subpath-test-preprovisionedpv-pqpv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:24.375308      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5329/pod-33bb882d-d6d6-4865-8a88-ac16370be56d\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:24.463895      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-463/exec-volume-test-preprovisionedpv-rdxr\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:25.715016      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3537/pod-455faeed-d71e-4fed-a10d-a54312a595d4\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:26.535425      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-74455cd588-8bp4m\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:26.608185      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"configmap-3294/pod-configmaps-635f2a7e-dcfc-45dd-afa7-4c82cc25a816\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:27.153539      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-1291/busybox1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:28.024194      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"secrets-1144/pod-secrets-63d1fe07-b4a3-45c3-bfa2-f3471a34fc61\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:29.392639      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:31.301872      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"prestop-2695/tester\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:33.243012      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-74455cd588-7q6s4\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:33.612792      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:34.522412      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-4586/webserver-74455cd588-dmvc5\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:35.204377      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"port-forwarding-4496/pfpod\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:35.693696      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"init-container-563/pod-init-5e8e1bbd-dbb7-4394-a9ee-7d78654eec0e\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:36.151849      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3332/pod-subpath-test-preprovisionedpv-pqpv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:36.456667      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-4182-7177/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:38.545215      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-4182/pod-67cd4de5-cf32-4f42-95a8-ba3d24cec815\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:19:38.967474      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-7638/hostexec-nodes-us-central1-a-hmlq-n2jm7\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:39.000072      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-1546/update-demo-nautilus-m6skx\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:39.032202      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-1546/update-demo-nautilus-6nwcz\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:40.604725      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-4182/hostexec-nodes-us-central1-a-g3vq-cjcgf\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:42.981417      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-9629/update-demo-nautilus-st6w9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:43.097675      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-437/inline-volume-rxlgk\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-rxlgk-my-volume\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:19:43.166952      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2162/hostexec-nodes-us-central1-a-hmlq-h47c5\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:44.762221      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-333/hostexec-nodes-us-central1-a-pp7m-7t9cz\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:45.449952      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-5582/liveness-d5607c84-f5e0-41dd-a5e5-c738b4587251\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:45.472905      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-437/inline-volume-tester-67tvn\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-67tvn-my-volume-0\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:19:45.538801      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-437-8961/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:46.044999      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-8661/pod-always-succeed5bc4cb74-135b-4a71-901a-01281a0b22d3\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:46.491482      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5929-7533/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:46.561316      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-1483/pod-projected-configmaps-d17d5b4e-0ef6-4919-aec8-598105b328a0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:46.594260      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1915-9882/csi-mockplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:50.249697      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2208/simpletest.rc-4f8jn\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:50.254144      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2208/simpletest.rc-tb66s\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:50.556754      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1978-3568/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:51.125698      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-437/inline-volume-tester-67tvn\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:52.545961      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5929/pod-subpath-test-dynamicpv-225f\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:53.663908      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1915/pvc-volume-tester-m5l2m\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:54.008948      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-5384/ss-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.218944      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-4twdk\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.241216      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-cz7lf\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.246797      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-rw6fk\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.287212      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hltkm\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.305004      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-4qtrr\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.307364      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-j9trh\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.322130      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-2mrjr\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.322217      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-vxsnt\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.344685      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-s9d9j\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.352575      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-n28pb\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.357238      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-96ttf\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.360128      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-pbxpx\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.411623      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-wrjfw\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.411965      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-l5pbh\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.412017      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-cfr59\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.421833      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-n9dnx\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.446121      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-gzx6b\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.451933      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-gnrsl\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.462367      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hphc6\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.462468      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-4ccqk\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.462515      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-n94rn\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464108      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-dhmwp\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464549      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-pzfwd\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464629      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-k6qdv\" node=\"nodes-us-central1-a-pp7m\" 
evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464695      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-c7lv7\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464757      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-zzrct\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464820      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-skzr7\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.464867      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-5nqw6\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.481864      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-2s6fv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.508876      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-2wkk9\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.537057      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-8zhx5\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.622718      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-5lp86\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.671164      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-nzj95\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.719439      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-2j46v\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.765923      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-qq2w2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.815183      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-vplhm\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.865633      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-mdtxh\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.895038      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-333/pod-f9198599-7a20-4bcc-87a4-fb7b75ce116a\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 4 node(s) didn't match Pod's node affinity/selector, 4 node(s) had volume node affinity conflict. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:19:54.922397      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-v6brj\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:54.964677      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hbnrj\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.033975      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-vbnqc\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.076482      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-8r9h5\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.119668      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-jlnl2\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.200592      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-fgrs5\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.244065      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-qcsww\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.281282      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-pbl7f\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.331259      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-q4jlp\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.385508      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-8nwjk\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.397228      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2162/pod-subpath-test-preprovisionedpv-ls7z\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:55.407804      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-7638/local-injector\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:19:55.439459      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-pqz4s\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.482886      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-gmkpf\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.524370      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-wjmrn\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.580715      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-5w92b\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.623153      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-qqzwq\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.706449      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-tmb26\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.725081  
    10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-c6vtq\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.778137      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-f9r79\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.827117      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-ncpnz\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.879055      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-lvc4z\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.927123      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-z6frn\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:55.972755      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hc6gq\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.021217      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-gql4r\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.085477      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-c2dkr\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.134752      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-rqds9\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.176630      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-62dqw\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.320491      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hklr9\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.369337      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-c99q7\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.444573      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-c9m66\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.475735      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-gpl5j\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.516982      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-sm874\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.575072      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-2tlcj\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.615742      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-dps4t\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.665992      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-g4xpp\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.720008      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-qdjqg\" node=\"nodes-us-central1-a-hmlq\" 
evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.778909      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-ts7vj\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.816616      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-9r8zs\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.869338      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-mkqht\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.920701      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hc5sv\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:56.935257      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-333/pod-f9198599-7a20-4bcc-87a4-fb7b75ce116a\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-hn24g\\\" is being deleted. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:19:56.991560      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-chcd2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.030751      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-g97lt\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.076668      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-6w572\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.120156      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-pmkhl\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.171516      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-hmsbk\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.228933      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-87gdd\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.286278      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-86mtv\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.332968      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-8k9b6\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.387530      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-wt622\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.429641      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-9rkzm\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.502238      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-td25l\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.522833      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-kjwxb\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.567972      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-5ksqb\" 
node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.618235      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-8pn84\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.669180      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-ddcx2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.722682      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-kvp4f\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.764710      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-57n9c\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.825519      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-xvr6x\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.872115      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-pdql9\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.915799      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-2bg59\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:57.971408      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-xntlb\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:58.036921      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-7bw8x\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:58.093722      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-k2sqk\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:58.117285      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-2004/simpletest.rc-lwvs5\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:19:59.114610      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-333/pod-f9198599-7a20-4bcc-87a4-fb7b75ce116a\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-hn24g\\\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:20:00.268135      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-6775/exec-volume-test-inlinevolume-kwzh\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:00.580681      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1978/pod-subpath-test-dynamicpv-ns29\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:03.117427      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-333/pod-f9198599-7a20-4bcc-87a4-fb7b75ce116a\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-hn24g\\\" not found. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:20:03.677702      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"subpath-4338/pod-subpath-test-configmap-s597\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:06.464379      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-7130/downwardapi-volume-39555012-cdd4-40ce-b99f-bde96dea81c9\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:08.501170      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-test-8583/bin-false6329980e-638b-47c4-9a9b-87f2a0040428\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:08.613771      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-runtime-8466/termination-message-container239b2e44-5529-4356-9159-c5dd99ae7d20\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:09.061346      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"events-463/send-events-9373615f-d816-4853-b2e1-f0e0844cb116\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:14.673284      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-61/security-context-9877fec3-6380-4486-be23-130a7a8d6624\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:17.228755      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-8684/busybox-fbe75627-3c4a-48c5-89b3-850023912457\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:17.572534      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-2\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:17.726057      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pvc-protection-3591/pvc-tester-8n5hv\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:17.833816      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5261/pod-subpath-test-inlinevolume-sszt\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:18.363642      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-expansion-6934/hostexec-nodes-us-central1-a-g3vq-ccmb6\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:18.607226      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5929/pod-subpath-test-dynamicpv-225f\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:18.821061      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-4154/suspend-false-to-true-v8vgf\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:18.833129      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-4154/suspend-false-to-true-jcsnf\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:20.202600      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:20.544086      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-7638/local-client\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:21.281757      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-5756/downwardapi-volume-465ce829-025b-4118-85a4-0c20765b6919\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:21.469690      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-9555/pod-050a7e5b-20f6-40ab-bff6-8e79c86fc980\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:22.956468      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-expansion-6934/pod-f8656fc8-9b7e-4008-a4cf-7349cd2f4199\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:20:23.193900      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replication-controller-8846/pod-adoption\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:24.117765      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-1254/downwardapi-volume-9719e0f8-333d-4437-ba52-21f2a065f0a0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:25.990349      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-8784/sample-webhook-deployment-5f8b6c9658-c2dhm\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:27.886079      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5335-5331/csi-mockplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:27.921953      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5335-5331/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:28.065140      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-1157/sample-webhook-deployment-5f8b6c9658-p8rp2\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:28.399102      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:28.740722      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-728/pod-logs-websocket-ac599b50-770e-4017-9ef1-b0ef98959871\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:29.734785      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-3591/pvc-tester-44ccw\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protection7hvwm\\\" is being deleted. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:20:30.311403      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replication-controller-5857/rc-test-jxjfg\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:30.329597      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-1872/downwardapi-volume-0a49e2dd-efa8-4781-bf01-7a519f8b2298\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:31.048183      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-4955/pod-projected-secrets-f625467e-d1c9-49b8-b49a-0a78b29ec66b\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:31.145121      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-3591/pvc-tester-44ccw\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protection7hvwm\\\" is being deleted. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:20:31.687001      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1315/hostexec-nodes-us-central1-a-g3vq-ct4ql\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:33.221658      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-4547/annotationupdated164cb9f-d039-4c93-be0b-b21a5350e66d\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:33.945695      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-2286/hostexec-nodes-us-central1-a-gl7l-cjfhp\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:34.434831      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-9745/hostexec-nodes-us-central1-a-gl7l-d7vqp\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:34.832270      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-9823/hostexec-nodes-us-central1-a-pp7m-s5vtw\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:34.941960      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5335/pvc-volume-tester-phn8v\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:35.657706      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replication-controller-5857/rc-test-dx5k5\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:35.873024      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1315/pod-b282ed35-a99f-454a-aefd-93ecd1dba7f7\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:20:37.357791      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3686/hostexec-nodes-us-central1-a-hmlq-998dr\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:37.839895      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-845/security-context-8e3bd05e-3f04-4d9e-88b6-7c1fbac85686\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:38.367038      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-9622/test-pod\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:38.540852      10 schedule_one.go:263] 
\"Successfully bound pod to node\" pod=\"csi-mock-volumes-9767-1249/csi-mockplugin-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:39.064443      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-9823/pod-subpath-test-preprovisionedpv-tpfc\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:39.125107      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-7437/inline-volume-tester-6mccq\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:39.192769      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-7437-1428/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:39.464158      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-9692/ss-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:40.081825      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-2286/pod-240304dd-78a1-4346-9a00-7495a5e5e882\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:20:40.918137      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-9745/pod-d2475450-b3e4-424e-afaf-9863513d0ea6\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:20:42.114323      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-2286/hostexec-nodes-us-central1-a-gl7l-8m4b8\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:42.968607      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-9745/hostexec-nodes-us-central1-a-gl7l-n6629\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:44.129428      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-6478/dns-test-e97d229d-d2ff-4262-b2c2-461ce10163f2\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:45.490740      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-6538/externalname-service-v6zds\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:45.491249      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-6538/externalname-service-75vs7\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:48.856552      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-8295/sample-webhook-deployment-5f8b6c9658-9jjrn\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:50.238306      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-7268/pod-c5fc84a7-867e-47a7-8c6a-883755691c47\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:51.516508      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-6538/execpodk5dwb\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:52.881049      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4734-47/csi-mockplugin-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:52.904104      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4734-47/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:54.335329      10 schedule_one.go:263] 
\"Successfully bound pod to node\" pod=\"security-context-test-9860/busybox-user-65534-f1fd2a3e-0714-4f01-b532-911b1dd9f90e\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:54.504927      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/service-headless-tr29v\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:54.592101      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/service-headless-dkmzk\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:54.592804      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/service-headless-6s4w4\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:55.085727      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9767/pvc-volume-tester-xp54l\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:55.753924      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3686/pod-subpath-test-preprovisionedpv-7prg\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:56.343483      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5789/pod-submit-remove-2f94d682-caa9-4577-b500-f4ba1e918712\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:58.032905      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5848/hostexec-nodes-us-central1-a-g3vq-bk9mp\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:58.188080      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-8295/webhook-to-be-mutated\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:20:58.377565      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-8305/hostexec-nodes-us-central1-a-pp7m-4m6r7\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:20:59.941903      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4734/pvc-volume-tester-d6g5b\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:00.145268      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-1742-8259/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:00.183003      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"crd-webhook-291/sample-crd-conversion-webhook-deployment-646fc49456-6cj2m\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:00.557576      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/service-headless-toggled-zg9tj\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:00.586328      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/service-headless-toggled-5c4df\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:00.588756      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/service-headless-toggled-md7px\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:00.593268      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-631/hostexec-nodes-us-central1-a-gl7l-kqlpt\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 
feasibleNodes=1\nI0623 13:21:00.690121      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-3008/startup-a5a3cceb-5950-4ca6-8b3a-1c838f62e72c\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:05.234140      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-1263/ss2-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:05.488529      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1350/netserver-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:21:05.502448      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1350/netserver-1\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:21:05.547865      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1350/netserver-2\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:21:05.577525      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1350/netserver-3\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:21:05.871095      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3686/pod-subpath-test-preprovisionedpv-7prg\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:06.216800      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-1742/pod-bb8aa01e-b5c5-4235-b21c-6bea57065a99\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:09.016879      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-8305/pod-subpath-test-preprovisionedpv-dzr4\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:09.067126      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-631/pod-2056438b-f922-4be1-8915-69ab9cd492f0\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:21:09.614857      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-3502/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:09.847436      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-5073/inline-volume-tester-qbgxn\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:09.868614      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-5073-7708/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:10.266620      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5848/pod-subpath-test-preprovisionedpv-64tg\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:21:11.085523      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-defaultsa\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.102038      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-mountsa\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.102502      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-nomountsa\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.120596      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-defaultsa-mountspec\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.123127      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-mountsa-mountspec\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.140586      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-nomountsa-mountspec\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.150669      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-defaultsa-nomountspec\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:21:11.153788      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"svcaccounts-3092/pod-service-account-mountsa-nomountspec\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nE0623 13:21:11.167531      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"pod-service-account-nomountsa-nomountspec\\\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"svcaccounts-3092/pod-service-account-nomountsa-nomountspec\"\nI0623 13:21:11.167598      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"svcaccounts-3092/pod-service-account-nomountsa-nomountspec\"\nE0623 13:21:11.168034      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"pod-service-account-nomountsa-nomountspec\\\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated\" pod=\"svcaccounts-3092/pod-service-account-nomountsa-nomountspec\"\nE0623 13:21:11.174974      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-service-account-defaultsa-nomountspec.16fb428f485da787\", GenerateName:\"\", Namespace:\"svcaccounts-3092\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"svcaccounts-3092\", Name:\"pod-service-account-defaultsa-nomountspec\", UID:\"df735cef-2c3a-4817-802e-d3ae307fe909\", APIVersion:\"v1\", ResourceVersion:\"18867\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned svcaccounts-3092/pod-service-account-defaultsa-nomountspec to nodes-us-central1-a-hmlq\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 150643079, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 150643079, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-service-account-defaultsa-nomountspec.16fb428f485da787\" is forbidden: unable to create new content in namespace 
svcaccounts-3092 because it is being terminated' (will not retry!)\nE0623 13:21:11.177134      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-service-account-mountsa-nomountspec.16fb428f488cfc24\", GenerateName:\"\", Namespace:\"svcaccounts-3092\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"svcaccounts-3092\", Name:\"pod-service-account-mountsa-nomountspec\", UID:\"aeb6050e-3159-4fea-adeb-684955c8980c\", APIVersion:\"v1\", ResourceVersion:\"18869\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned svcaccounts-3092/pod-service-account-mountsa-nomountspec to nodes-us-central1-a-gl7l\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 153744932, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 153744932, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-service-account-mountsa-nomountspec.16fb428f488cfc24\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated' (will not retry!)\nE0623 13:21:11.179225      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-service-account-nomountsa-nomountspec.16fb428f49682251\", GenerateName:\"\", Namespace:\"svcaccounts-3092\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"svcaccounts-3092\", Name:\"pod-service-account-nomountsa-nomountspec\", UID:\"c25a9259-d329-45cb-8306-6bc6cdb30748\", APIVersion:\"v1\", ResourceVersion:\"18872\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"pod-service-account-nomountsa-nomountspec\\\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 168107089, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 168107089, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-service-account-nomountsa-nomountspec.16fb428f49682251\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated' (will not retry!)\nI0623 
I0623 13:21:11.468060      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-972-6116/csi-mockplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
E0623 13:21:12.187993      10 framework.go:1046] "Failed running Bind plugin" err="pods \"pod-service-account-nomountsa-nomountspec\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated" plugin="DefaultBinder" pod="svcaccounts-3092/pod-service-account-nomountsa-nomountspec"
I0623 13:21:12.188051      10 schedule_one.go:794] "Failed to bind pod" pod="svcaccounts-3092/pod-service-account-nomountsa-nomountspec"
E0623 13:21:12.188104      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"pod-service-account-nomountsa-nomountspec\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated" pod="svcaccounts-3092/pod-service-account-nomountsa-nomountspec"
E0623 13:21:12.193877      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-service-account-nomountsa-nomountspec.16fb428f49682251", GenerateName:"", Namespace:"svcaccounts-3092", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"svcaccounts-3092", Name:"pod-service-account-nomountsa-nomountspec", UID:"c25a9259-d329-45cb-8306-6bc6cdb30748", APIVersion:"v1", ResourceVersion:"18874", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"pod-service-account-nomountsa-nomountspec\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 168107089, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 21, 12, 188158514, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-service-account-nomountsa-nomountspec.16fb428f49682251" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated' (will not retry!)
I0623 13:21:12.563839      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-8457/suspend-false-to-true-jnkdp" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:12.587999      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-8457/suspend-false-to-true-n947l" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:13.144814      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumemode-631/hostexec-nodes-us-central1-a-gl7l-h8rbt" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:13.625957      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-3502/verify-service-up-exec-pod-m98lz" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
E0623 13:21:15.187933      10 framework.go:1046] "Failed running Bind plugin" err="pods \"pod-service-account-nomountsa-nomountspec\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated" plugin="DefaultBinder" pod="svcaccounts-3092/pod-service-account-nomountsa-nomountspec"
I0623 13:21:15.188397      10 schedule_one.go:794] "Failed to bind pod" pod="svcaccounts-3092/pod-service-account-nomountsa-nomountspec"
E0623 13:21:15.188639      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"pod-service-account-nomountsa-nomountspec\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated" pod="svcaccounts-3092/pod-service-account-nomountsa-nomountspec"
E0623 13:21:15.194075      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-service-account-nomountsa-nomountspec.16fb428f49682251", GenerateName:"", Namespace:"svcaccounts-3092", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"svcaccounts-3092", Name:"pod-service-account-nomountsa-nomountspec", UID:"c25a9259-d329-45cb-8306-6bc6cdb30748", APIVersion:"v1", ResourceVersion:"18874", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"pod-service-account-nomountsa-nomountspec\" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 21, 11, 168107089, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 21, 15, 188847917, time.Local), Count:3, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-service-account-nomountsa-nomountspec.16fb428f49682251" is forbidden: unable to create new content in namespace svcaccounts-3092 because it is being terminated' (will not retry!)
I0623 13:21:16.112029      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubelet-test-4184/busybox-host-aliasesbf674a65-4db0-4566-acae-164c91669be7" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:16.583905      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-4043/hostexec-nodes-us-central1-a-gl7l-qcgtm" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:17.916686      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4770-3652/csi-mockplugin-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:17.944064      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4770-3652/csi-mockplugin-attacher-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:19.024176      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-1263/ss2-1" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:19.659823      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-1027-767/csi-hostpathplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:19.695056      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-5447/httpd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:20.623280      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-7151/image-pull-testd3f16149-ab82-496e-bb6f-6f0e4d0e5d22" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:22.923849      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-1533/inline-volume-fntlg" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-fntlg-my-volume\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:21:23.076483      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-972/pvc-volume-tester-t47pm" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:24.270770      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-6848/downwardapi-volume-4332392e-568f-4625-8278-142b590649ae" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:24.678908      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-3502/verify-service-down-host-exec-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:24.934747      10 schedule_one.go:263] "Successfully bound pod to node" pod="e2e-privileged-pod-1466/privileged-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:25.328487      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-1533/inline-volume-tester-z9n54" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-tester-z9n54-my-volume-0\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:21:25.367933      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-1533-7852/csi-hostpathplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:26.628543      10 schedule_one.go:263] "Successfully bound pod to node" pod="init-container-9886/pod-init-1e3ee4e3-5612-4669-b40d-fa66b2a926f1" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:27.609361      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-1263/ss2-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:27.638998      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-1350/test-container-pod" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:27.735239      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-3307/pod-projected-configmaps-0884378a-e87e-4ae6-8796-b36cfc9705d2" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:28.227149      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-1533/inline-volume-tester-z9n54" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:28.641571      10 schedule_one.go:263] "Successfully bound pod to node" pod="crictl-2545/hostexec-nodes-us-central1-a-g3vq-xkrm2" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:29.355170      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-7431/downwardapi-volume-4760e3c7-ec52-4c40-8348-bac97095d11f" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:30.757763      10 schedule_one.go:263] "Successfully bound pod to node" pod="crictl-2545/hostexec-nodes-us-central1-a-gl7l-nwq5s" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:30.932718      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-3502/verify-service-down-host-exec-pod" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:32.367131      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-1084/hostexec-nodes-us-central1-a-g3vq-ht5pq" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:32.456079      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="csi-mock-volumes-4770/pvc-volume-tester-8sdtq" err="0/5 nodes are available: 1 node(s) did not have enough free storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:21:33.355889      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-1533/inline-volume-tester2-2bbzp" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-tester2-2bbzp-my-volume-0\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:21:34.034595      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-2206/hostexec-nodes-us-central1-a-g3vq-5rvgs" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:34.209797      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="csi-mock-volumes-4770/pvc-volume-tester-8sdtq" err="0/5 nodes are available: 1 node(s) did not have enough free storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 13:21:35.127839      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-1271/pod-qos-class-461dff74-f528-464e-ad0e-bfa95fd34cd5" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:35.826101      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-5952/pod1" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:35.951490      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-test-4083/busybox-privileged-true-006fbe59-2a4c-4db2-b7ba-6525a59a1e6a" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:36.233645      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-1533/inline-volume-tester2-2bbzp" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:36.883343      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-7022/inline-volume-82zkb" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-82zkb-my-volume\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:21:37.703506      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-7043/pod-3138fea4-2590-473f-9469-a39ee2c04791" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:37.863719      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-5952/pod2" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:38.898700      10 schedule_one.go:263] "Successfully bound pod to node" pod="crictl-2545/hostexec-nodes-us-central1-a-hmlq-x7qt6" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:38.982119      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-4043/pod-subpath-test-preprovisionedpv-r4vd" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
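The "waiting for ephemeral volume controller to create the persistentvolumeclaim" messages above are likewise transient by design: for a generic ephemeral (inline) volume the scheduler keeps the pod Pending until the ephemeral-volume controller has created the backing PersistentVolumeClaim, whose name is derived deterministically from the pod and volume names. A short Go sketch of that naming rule, which matches the claim names the scheduler is waiting on in this log:

package main

import "fmt"

// For a generic ephemeral volume, the ephemeral-volume controller creates a
// PersistentVolumeClaim named "<pod name>-<volume name>". The scheduler logs
// "no fit; waiting" until that claim exists and is bound.
func ephemeralPVCName(podName, volumeName string) string {
	return podName + "-" + volumeName
}

func main() {
	// Reproduces the claim name from the ephemeral-1533 messages above.
	fmt.Println(ephemeralPVCName("inline-volume-fntlg", "my-volume")) // inline-volume-fntlg-my-volume
}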
I0623 13:21:39.236606      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-7022/inline-volume-tester-2s2jw" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-tester-2s2jw-my-volume-0\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:21:39.256142      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-7022-9675/csi-hostpathplugin-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:40.037180      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-test-892/explicit-root-uid" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:40.561149      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-1084/exec-volume-test-preprovisionedpv-hnlb" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:40.760577      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-2206/pod-9819b30f-90cd-41dc-b92c-633281082a4e" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:21:41.040883      10 schedule_one.go:263] "Successfully bound pod to node" pod="crictl-2545/hostexec-nodes-us-central1-a-pp7m-x5tw2" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:41.499904      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-3502/verify-service-up-host-exec-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:42.067530      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-3243/hostexec-nodes-us-central1-a-pp7m-dtlnx" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:43.373003      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-6310/hostexec-nodes-us-central1-a-g3vq-dt424" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:43.913542      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-5952/execpods2xl8" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:44.490714      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubelet-test-295/bin-false34048744-d983-4ff2-911e-b4b465fb1333" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:45.540897      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-3502/verify-service-up-exec-pod-9mgkw" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:46.094353      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-2512/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:21:46.116945      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-2512/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:21:46.152884      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-2512/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:21:46.192613      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-2512/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:21:47.360141      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-7022/inline-volume-tester-2s2jw" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:48.567414      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-1176/pod-configmaps-7c2cc72a-a35d-477f-ae29-4c07001a9b78" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:49.111332      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-9123/hostexec-nodes-us-central1-a-g3vq-qdsfz" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:49.796712      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-1316/pod-subpath-test-inlinevolume-7s4t" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:49.909224      10 schedule_one.go:263] "Successfully bound pod to node" pod="runtimeclass-9374/test-runtimeclass-runtimeclass-9374-unconfigured-handler-gw2l8" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:51.782895      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-916/hostexec-nodes-us-central1-a-hmlq-z9pb2" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:52.236056      10 schedule_one.go:263] "Successfully bound pod to node" pod="replicaset-9736/my-hostname-private-3f174257-019d-4059-a6fe-36c5cf8dd9dc-fg2cl" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:52.575736      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-7145/pod-d47ff207-8cd3-4141-8d37-7d46d5d19eae" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:52.606832      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-3502/verify-service-down-host-exec-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:52.923041      10 schedule_one.go:263] "Successfully bound pod to node" pod="port-forwarding-8041/pfpod" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:53.298615      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-6945/pod-81f229d6-82b8-4512-85d1-469cdd67c513" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:54.022423      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-6310/exec-volume-test-preprovisionedpv-nq2m" node="nodes-us-central1-a-g3vq" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:54.300676      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-3243/local-injector" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:21:54.565060      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-1693/pod1" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:54.723090      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-4282/security-context-c3415351-4e95-44e3-adbb-2c3824f001dc" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:21:59.038111      10 schedule_one.go:263] "Successfully bound pod to node" pod="proxy-9969/proxy-service-bdxdv-n4hz5" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:00.168987      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumemode-7463/hostexec-nodes-us-central1-a-gl7l-hzxkv" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:00.611783      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-1693/execpod5562c" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:00.869280      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-5438/pod-projected-secrets-e7913b18-3b31-4fe6-a1bc-966e90fc328e" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:01.397373      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-8188/sample-webhook-deployment-5f8b6c9658-7k7pd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:01.812169      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-7107/pause" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:04.892365      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-6068/pod-projected-configmaps-c8f4d902-8e8a-4735-8dae-5219caf06b29" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:06.916820      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-3243/local-client" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:07.052612      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8060/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:07.064107      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8060/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:07.085306      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8060/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:07.092922      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8060/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:08.104824      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-1693/pod2" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:08.342700      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-2512/test-container-pod" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:08.380749      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-2512/host-test-container-pod" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:08.755434      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4398-7189/csi-mockplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:08.849236      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4398-7189/csi-mockplugin-resizer-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:09.369417      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-9123/pod-dcec6d64-5073-48e6-b94d-32cc9a8bf523" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:09.892812      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-1285/sample-webhook-deployment-5f8b6c9658-xnmbg" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:10.109785      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-916/pod-subpath-test-preprovisionedpv-94x5" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:10.428812      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumemode-7463/pod-49f3aabe-c7ca-4a84-9ace-d173a5754c1e" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:10.740823      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-3301-517/csi-hostpathplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:10.813606      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-3301/pod-51f0d807-d80a-4de7-96ae-1c63ec44f5a4" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:22:11.078235      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-3047/pod-should-be-evicted7d29e26c-1dcc-44c9-88c0-82cde89b01d0" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:11.295073      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-7343/e2e-bn96s-nc9gf" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:11.327542      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-7343/e2e-bn96s-vf5w4" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:11.950840      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-3197/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:11.996700      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-3197/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:12.036807      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-3197/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:12.152673      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-3197/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:13.511749      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-8469/pod-configmaps-d27d6098-899a-4f85-be28-18a65b7aedd2" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:13.576551      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-380/pod-273f3455-bb26-4051-abbb-e026cc5fae39" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:14.104760      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-3301/pod-51f0d807-d80a-4de7-96ae-1c63ec44f5a4" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:22:14.516828      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumemode-7463/hostexec-nodes-us-central1-a-gl7l-2m9bn" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:15.903932      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-1654/termination-message-container176c3fa8-52d4-4999-8f01-f9d92c101c59" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:16.192361      10 schedule_one.go:263] "Successfully bound pod to node" pod="security-context-1491/security-context-2521a6f4-c56e-4b9a-9987-555e6a924d8b" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:16.269012      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-3301/pod-51f0d807-d80a-4de7-96ae-1c63ec44f5a4" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:22:17.960449      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7024/hostexec-nodes-us-central1-a-gl7l-rm6t8" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:18.528152      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-2643/hostexec-nodes-us-central1-a-gl7l-rgvqw" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:18.819528      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-1285/to-be-attached-pod" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:19.137874      10 schedule_one.go:263] "Successfully bound pod to node" pod="port-forwarding-2777/pfpod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:19.159276      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-1086/pod-subpath-test-inlinevolume-w768" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:19.757574      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5359/hostexec-nodes-us-central1-a-pp7m-vqbqc" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:20.280189      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-3301/pod-51f0d807-d80a-4de7-96ae-1c63ec44f5a4" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:20.340804      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-4398/pvc-volume-tester-nb6nb" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:21.518695      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-4079-1763/csi-hostpathplugin-0" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:21.820868      10 schedule_one.go:263] "Successfully bound pod to node" pod="secrets-8199/pod-secrets-bdcfab22-6f9d-46d0-b061-e1e9287f704b" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:22.043430      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-6607/pod-configmaps-7d8e92a1-c344-426f-9a5c-a5801459aa1e" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:22.251563      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-7343/e2e-bn96s-p9tvj" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:22.268615      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-7343/e2e-bn96s-ptjzd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:24.350354      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-1356/pod-subpath-test-inlinevolume-7kx6" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:25.434427      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-98/hostexec-nodes-us-central1-a-hmlq-h784h" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:25.850300      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-7412/hostexec-nodes-us-central1-a-gl7l-wtf7t" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:28.168957      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-1670/externalsvc-w89pw" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:28.188056      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-1670/externalsvc-kg2kp" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:30.155606      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-8546/pod-configmaps-c89da5e0-887d-4c34-a400-4675c03670b0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:30.312440      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-3566/exceed-active-deadline-4kvgw" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:30.332813      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-3566/exceed-active-deadline-fkdmv" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:31.391354      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-834/downwardapi-volume-8ee02a52-d913-4ef8-9ba1-ab6e20c7a3b7" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:32.450568      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-6038/netserver-0" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:32.462961      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-6038/netserver-1" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:32.469818      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-6038/netserver-2" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:32.485297      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-6038/netserver-3" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=1
I0623 13:22:32.718227      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-7678/httpd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:33.469079      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-9542/pod-projected-secrets-3517e371-4a62-4335-b23a-0d9d3113c657" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:33.567891      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-4079/hostpath-injector" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:34.895482      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-3301/pod-fe517638-5cab-4af7-a6c4-6d5f219f4767" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:22:36.284827      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-3301/pod-fe517638-5cab-4af7-a6c4-6d5f219f4767" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:22:37.133720      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-8060/test-container-pod" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:37.234321      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-1670/execpodlw2fd" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:38.193239      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-3197/test-container-pod" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:38.718391      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-3301/pod-fe517638-5cab-4af7-a6c4-6d5f219f4767" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 13:22:39.685691      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-98/pod-subpath-test-preprovisionedpv-lg69" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:39.813971      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumemode-9412-7994/csi-hostpathplugin-0" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:39.918940      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-5359/pod-subpath-test-preprovisionedpv-wvrs" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:40.204156      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7024/pod-subpath-test-preprovisionedpv-trvw" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:40.286422      10 schedule_one.go:263] "Successfully bound pod to node" pod="var-expansion-5827/var-expansion-bed93462-b4bd-43f6-a13b-32812c6b7b39" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:40.324731      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-7412/local-injector" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:40.401835      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-8178/pod-configmaps-e9c00585-2db6-4818-91c4-5d3ec8f021b3" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:22:40.743333      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-2643/local-injector" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
I0623 13:22:41.903970      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-2549/pod-subpath-test-inlinevolume-2z2q" node="nodes-us-central1-a-gl7l" evaluatedNodes=1 feasibleNodes=1
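The provisioning-3301 pod above is retried with "pod has unbound immediate PersistentVolumeClaims" because, with immediate volume binding, the scheduler refuses to place a pod while any of its claims is still Pending; once the claim binds, the pod schedules (13:22:43.295614 below). A minimal client-go sketch of polling for the condition the scheduler is waiting on; the claim name here is hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		// "my-claim" is a hypothetical PVC name; the namespace matches the test above.
		pvc, err := client.CoreV1().PersistentVolumeClaims("provisioning-3301").
			Get(context.TODO(), "my-claim", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The scheduler stops reporting "unbound immediate
		// PersistentVolumeClaims" once the claim reaches phase Bound.
		if pvc.Status.Phase == corev1.ClaimBound {
			fmt.Println("claim bound to PV:", pvc.Spec.VolumeName)
			return
		}
		time.Sleep(2 * time.Second)
	}
}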
node\" pod=\"provisioning-3301/pod-fe517638-5cab-4af7-a6c4-6d5f219f4767\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:43.907860      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5616/hostexec-nodes-us-central1-a-hmlq-qxqq8\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:44.638867      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-test-7353/busybox-scheduling-a62c0120-38b1-48c2-8725-840987377902\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:22:46.220269      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8783-6835/csi-mockplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:46.671805      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2146/hostexec-nodes-us-central1-a-g3vq-5d7kv\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:49.868829      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-9412/pod-378811d7-3f84-44c2-86f0-561ecf85eeed\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:22:50.818018      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1404/hostexec-nodes-us-central1-a-g3vq-5ds4v\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:54.084757      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-9774/hostexec-nodes-us-central1-a-g3vq-8d2jj\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:54.104139      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-973/hostexec-nodes-us-central1-a-hmlq-t24r5\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:54.170459      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5616/pod-subpath-test-preprovisionedpv-xbs4\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:54.902466      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2146/pod-subpath-test-preprovisionedpv-v6zd\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:56.548110      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-4079/hostpath-client\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:56.951784      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-7412/local-client\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:57.776674      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8783/pvc-volume-tester-sg5j4\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:57.903940      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-9412/hostexec-nodes-us-central1-a-gl7l-5bh6j\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:22:58.544508      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-6038/test-container-pod\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:22:59.462890      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-2643/local-client\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:00.151295      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"cronjob-1475/concurrent-27599843-ll5jd\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:00.159485      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"cronjob-2698/replace-27599843-snxrv\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:00.564736      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-rcxsc\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nE0623 13:23:00.572051      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-shared-unset-5488fbb544-rcxsc.16fb42a8c1f3c4a7\", GenerateName:\"\", Namespace:\"apply-4222\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-4222\", Name:\"deployment-shared-unset-5488fbb544-rcxsc\", UID:\"1d51d458-5912-41ef-b88c-8e2825b6ece1\", APIVersion:\"v1\", ResourceVersion:\"23333\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned apply-4222/deployment-shared-unset-5488fbb544-rcxsc to nodes-us-central1-a-pp7m\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 23, 0, 564706471, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 23, 0, 564706471, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-shared-unset-5488fbb544-rcxsc.16fb42a8c1f3c4a7\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated' (will not retry!)\nE0623 13:23:00.572375      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"deployment-shared-unset-5488fbb544-c6f86\\\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-c6f86\"\nI0623 13:23:00.572431      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-c6f86\"\nE0623 13:23:00.572510      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-shared-unset-5488fbb544-c6f86\\\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-c6f86\"\nE0623 13:23:00.573155      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"deployment-shared-unset-5488fbb544-7fgs2\\\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-7fgs2\"\nI0623 13:23:00.573191      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-7fgs2\"\nE0623 13:23:00.573257      10 
scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-shared-unset-5488fbb544-7fgs2\\\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated\" pod=\"apply-4222/deployment-shared-unset-5488fbb544-7fgs2\"\nE0623 13:23:00.599901      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-shared-unset-5488fbb544-c6f86.16fb42a8c26bb1ac\", GenerateName:\"\", Namespace:\"apply-4222\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-4222\", Name:\"deployment-shared-unset-5488fbb544-c6f86\", UID:\"a06cc9fc-a124-4334-bda0-babddbc177b1\", APIVersion:\"v1\", ResourceVersion:\"23337\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-shared-unset-5488fbb544-c6f86\\\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 23, 0, 572565932, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 23, 0, 572565932, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-shared-unset-5488fbb544-c6f86.16fb42a8c26bb1ac\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated' (will not retry!)\nE0623 13:23:00.604827      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-shared-unset-5488fbb544-7fgs2.16fb42a8c276dfbe\", GenerateName:\"\", Namespace:\"apply-4222\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-4222\", Name:\"deployment-shared-unset-5488fbb544-7fgs2\", UID:\"e234e8c2-9269-4fae-b982-67040e6e47a6\", APIVersion:\"v1\", ResourceVersion:\"23335\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-shared-unset-5488fbb544-7fgs2\\\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 23, 0, 573298622, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 23, 0, 573298622, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-shared-unset-5488fbb544-7fgs2.16fb42a8c276dfbe\" is forbidden: unable to create new content in namespace apply-4222 because it is being terminated' (will not retry!)\nI0623 13:23:00.703462      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7985/pod-0\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:00.723029      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-7985/pod-1\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:00.727285      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-6805/downwardapi-volume-934a7918-6a5c-4e2d-a4dc-4d08c0bd3bf3\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:00.896229      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-7085/pod-fd929312-59bc-4765-bd94-05e5ef32c5dd\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:04.563878      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1980/hostexec-nodes-us-central1-a-hmlq-n8cdz\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:05.511099      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-4817/adopt-release-kht8n\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:05.527890      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-4817/adopt-release-z4g7x\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:06.409353      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-4101-5145/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:08.200477      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-3273/pod-projected-configmaps-09c505fa-8b67-473e-b69c-03e88e540878\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:08.466063      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pv-1097/pod-ephm-test-projected-2cxm\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:08.882720      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-7834/downward-api-1537bf68-a678-460b-ae32-8dde00ab01db\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:09.005978      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1404/pod-subpath-test-preprovisionedpv-bgsk\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:09.057286      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4381/hostexec-nodes-us-central1-a-g3vq-ffds9\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:10.357323      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-973/pod-subpath-test-preprovisionedpv-kvpz\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:10.400172      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-9774/pod-subpath-test-preprovisionedpv-9hlc\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:11.134159      10 
I0623 13:23:11.134159      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-b5jjs" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.173131      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-stb9x" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.183664      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-5mvsm" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.196975      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-fjm28" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.222831      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-njr2z" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.222989      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-wjlrx" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.223033      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-hnvk6" node="nodes-us-central1-a-hmlq" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.223305      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-kgvlz" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.239216      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-ztpks" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.270591      10 schedule_one.go:263] "Successfully bound pod to node" pod="disruption-3150/rs-ptkwp" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.375810      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-7425/busybox-d701053a-1395-4044-8089-a57c7db54bd7" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:11.594988      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-4817/adopt-release-hsfqt" node="nodes-us-central1-a-pp7m" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:12.239974      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-7974/pod-configmaps-6ae56e0e-5984-4947-a2dc-32c7e1746cca" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:12.440736      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-4101/hostpath-injector" node="nodes-us-central1-a-hmlq" evaluatedNodes=1 feasibleNodes=1
I0623 13:23:12.730526      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-3502/termination-message-container6ea7c1be-d976-4a82-ae3a-c99a0d919c77" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:13.138056      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-9685-1572/csi-mockplugin-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:23:13.172775      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-9685-1572/csi-mockplugin-attacher-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:23:13.212662      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-9685-1572/csi-mockplugin-resizer-0" node="nodes-us-central1-a-pp7m" evaluatedNodes=1 feasibleNodes=1
I0623 13:23:16.316827      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4132/pod-test" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:16.480677      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-6835/sample-webhook-deployment-5f8b6c9658-4m4rh" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:17.229876      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-4381/pod-5811a623-d355-4aae-bfd0-a4be57aa5ad6" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=1
I0623 13:23:17.467650      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-8530/agnhost-primary-m8zql" node="nodes-us-central1-a-gl7l" evaluatedNodes=5 feasibleNodes=4
E0623 13:23:17.687778      10 framework.go:1046] "Failed running Bind plugin" err="pods \"agnhost-primary-ht8db\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated" plugin="DefaultBinder" pod="kubectl-8530/agnhost-primary-ht8db"
I0623 13:23:17.687844      10 schedule_one.go:794] "Failed to bind pod" pod="kubectl-8530/agnhost-primary-ht8db"
E0623 13:23:17.687898      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-ht8db\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated" pod="kubectl-8530/agnhost-primary-ht8db"
E0623 13:23:17.707728      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"agnhost-primary-ht8db.16fb42acbe941d4b", GenerateName:"", Namespace:"kubectl-8530", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kubectl-8530", Name:"agnhost-primary-ht8db", UID:"baf1a0a4-a2d2-4277-958a-29b42c98e61f", APIVersion:"v1", ResourceVersion:"24111", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-ht8db\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 23, 17, 687975243, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 23, 17, 687975243, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "agnhost-primary-ht8db.16fb42acbe941d4b" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated' (will not retry!)
I0623 13:23:17.824510      10 schedule_one.go:263] "Successfully bound pod to node" pod="var-expansion-5649/var-expansion-80b1d9e3-2ea5-41e0-9ff0-ee5abfdc3850" node="nodes-us-central1-a-g3vq" evaluatedNodes=5 feasibleNodes=4
I0623 13:23:18.709408
 10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-6287/kube-proxy-mode-detector\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nE0623 13:23:19.328979      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"agnhost-primary-ht8db\\\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"kubectl-8530/agnhost-primary-ht8db\"\nI0623 13:23:19.329022      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"kubectl-8530/agnhost-primary-ht8db\"\nE0623 13:23:19.329267      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"agnhost-primary-ht8db\\\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated\" pod=\"kubectl-8530/agnhost-primary-ht8db\"\nE0623 13:23:19.334293      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"agnhost-primary-ht8db.16fb42acbe941d4b\", GenerateName:\"\", Namespace:\"kubectl-8530\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"kubectl-8530\", Name:\"agnhost-primary-ht8db\", UID:\"baf1a0a4-a2d2-4277-958a-29b42c98e61f\", APIVersion:\"v1\", ResourceVersion:\"24114\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"agnhost-primary-ht8db\\\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 23, 17, 687975243, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 23, 19, 329393671, time.Local), Count:2, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"agnhost-primary-ht8db.16fb42acbe941d4b\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated' (will not retry!)\nE0623 13:23:22.340694      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"agnhost-primary-ht8db\\\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"kubectl-8530/agnhost-primary-ht8db\"\nI0623 13:23:22.340889      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"kubectl-8530/agnhost-primary-ht8db\"\nE0623 13:23:22.341065      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"agnhost-primary-ht8db\\\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated\" pod=\"kubectl-8530/agnhost-primary-ht8db\"\nE0623 13:23:22.369987      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"agnhost-primary-ht8db.16fb42acbe941d4b\", GenerateName:\"\", Namespace:\"kubectl-8530\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"kubectl-8530\", Name:\"agnhost-primary-ht8db\", UID:\"baf1a0a4-a2d2-4277-958a-29b42c98e61f\", APIVersion:\"v1\", ResourceVersion:\"24114\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"agnhost-primary-ht8db\\\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 13, 23, 17, 687975243, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 13, 23, 22, 341249509, time.Local), Count:3, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"agnhost-primary-ht8db.16fb42acbe941d4b\" is forbidden: unable to create new content in namespace kubectl-8530 because it is being terminated' (will not retry!)\nI0623 13:23:22.873081      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7922/inline-volume-7fmtp\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-7fmtp-my-volume\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:23:24.264652      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5976-3277/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:24.294625      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5976-3277/csi-mockplugin-0\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:24.722066      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9685/pvc-volume-tester-6l6jw\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:24.831249      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1980/pod-subpath-test-preprovisionedpv-wsfr\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:24.943062      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-6287/hostexec-nodes-us-central1-a-g3vq-trfmq\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:25.218754      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"conntrack-3638/boom-server\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:25.298605      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7922/inline-volume-tester-pwb69\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-pwb69-my-volume-0\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 13:23:25.343029      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-7922-6721/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=1 feasibleNodes=1\nI0623 13:23:25.570052      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4381/pod-a9f939a3-7300-4ed3-80d3-819577f2e9a6\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=1\nI0623 13:23:25.904122      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-r4rbw\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:25.914718      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-x4msk\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:25.936693      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-bs7r7\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:25.955213      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-jrjck\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:25.955754      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-dgkxw\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.011550      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-76fjg\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.011710      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-fsn4z\" node=\"nodes-us-central1-a-g3vq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.011786      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-wnw6m\" node=\"nodes-us-central1-a-hmlq\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.011962      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-tw472\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.051865      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-j65ts\" node=\"nodes-us-central1-a-gl7l\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.068991      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-g972b\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.096879      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-xrgfp\" node=\"nodes-us-central1-a-pp7m\" evaluatedNodes=5 feasibleNodes=4\nI0623 13:23:26.096973      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-2021/cleanup40-8137cd71-21d7-4d25-a247-516c5e56be1f-58bnl\" node=\"nodes-us-cent