Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2022-06-23 07:04
Elapsed: 39m58s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 183 lines ...
Updating project ssh metadata...
..............................................Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha].
.done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I0623 07:05:32.120096    5950 up.go:44] Cleaning up any leaked resources from previous cluster
I0623 07:05:32.120268    5950 dumplogs.go:45] /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kops toolbox dump --name e2e-e2e-kops-gce-stable.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh550045927/key --ssh-user prow
W0623 07:05:32.312001    5950 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 07:05:32.312053    5950 down.go:48] /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kops delete cluster --name e2e-e2e-kops-gce-stable.k8s.local --yes
I0623 07:05:32.333032    5996 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 07:05:32.333129    5996 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-gce-stable.k8s.local" not found
I0623 07:05:32.434330    5950 gcs.go:51] gsutil ls -b -p k8s-jkns-e2e-gce-alpha gs://k8s-jkns-e2e-gce-alpha-state-9e
I0623 07:05:33.927345    5950 gcs.go:70] gsutil mb -p k8s-jkns-e2e-gce-alpha gs://k8s-jkns-e2e-gce-alpha-state-9e
Creating gs://k8s-jkns-e2e-gce-alpha-state-9e/...
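
(The harness keeps the kops cluster spec in a per-job GCS state store; the bucket created above is what every later kops command reads from. A minimal sketch of the same setup, assuming the bucket name from the log and the standard KOPS_STATE_STORE variable:)

# kops resolves its state store from KOPS_STATE_STORE; the harness
# creates a per-job GCS bucket and points kops at it.
gsutil mb -p k8s-jkns-e2e-gce-alpha gs://k8s-jkns-e2e-gce-alpha-state-9e
export KOPS_STATE_STORE=gs://k8s-jkns-e2e-gce-alpha-state-9e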
I0623 07:05:35.901096    5950 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/23 07:05:35 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0623 07:05:35.908704    5950 http.go:37] curl https://ip.jsb.workers.dev
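
(The 404 above means this build VM has no external access config at that metadata path, so the harness falls back to an external echo service to learn its egress IP, plausibly the source of the --admin-access CIDR below. A minimal shell equivalent, assuming the same two endpoints as the log; note the real GCE metadata server also requires the Metadata-Flavor header:)

# Ask the GCE metadata server for the instance's external IP; fall
# back to the echo service when the access-config entry is missing.
curl -sf -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
  || curl -sf https://ip.jsb.workers.dev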
I0623 07:05:35.997261    5950 up.go:159] /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kops create cluster --name e2e-e2e-kops-gce-stable.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.25.0-alpha.1 --ssh-public-key /tmp/kops-ssh550045927/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --gce-service-account=default --admin-access 35.222.229.21/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-central1-a --master-size e2-standard-2 --project k8s-jkns-e2e-gce-alpha
I0623 07:05:36.017320    6288 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 07:05:36.017995    6288 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0623 07:05:36.043666    6288 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh550045927/key.pub
I0623 07:05:36.205689    6288 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 375 lines ...
I0623 07:05:43.761719    6309 keypair.go:225] Issuing new certificate: "etcd-manager-ca-events"
I0623 07:05:43.763119    6309 keypair.go:225] Issuing new certificate: "etcd-peers-ca-main"
W0623 07:05:43.858881    6309 vfs_castore.go:379] CA private key was not found
I0623 07:05:43.952857    6309 keypair.go:225] Issuing new certificate: "service-account"
I0623 07:05:43.956427    6309 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0623 07:05:56.509302    6309 executor.go:111] Tasks: 42 done / 68 total; 20 can run
W0623 07:06:09.389152    6309 executor.go:139] error running task "ForwardingRule/api-e2e-e2e-kops-gce-stable-k8s-local" (9m47s remaining to succeed): error creating ForwardingRule "api-e2e-e2e-kops-gce-stable-k8s-local": googleapi: Error 400: The resource 'projects/k8s-jkns-e2e-gce-alpha/regions/us-central1/targetPools/api-e2e-e2e-kops-gce-stable-k8s-local' is not ready, resourceNotReady
I0623 07:06:09.389218    6309 executor.go:111] Tasks: 61 done / 68 total; 5 can run
I0623 07:06:16.195028    6309 executor.go:111] Tasks: 66 done / 68 total; 2 can run
I0623 07:06:26.998497    6309 executor.go:111] Tasks: 68 done / 68 total; 0 can run
I0623 07:06:27.046646    6309 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-e2e-kops-gce-stable.k8s.local
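
(The resourceNotReady error at 07:06:09 is GCE reporting that the target pool referenced by the new ForwardingRule was still provisioning; the kops executor treats it as retryable and re-runs the task until all 68 tasks complete, as they do by 07:06:26. To inspect the two dependent resources by hand, a sketch using the names from the log:)

# The ForwardingRule task retries until its target pool is ready.
gcloud compute target-pools describe api-e2e-e2e-kops-gce-stable-k8s-local \
    --region us-central1 --project k8s-jkns-e2e-gce-alpha
gcloud compute forwarding-rules describe api-e2e-e2e-kops-gce-stable-k8s-local \
    --region us-central1 --project k8s-jkns-e2e-gce-alpha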

... skipping 8 lines ...

I0623 07:06:37.492318    5950 up.go:243] /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kops validate cluster --name e2e-e2e-kops-gce-stable.k8s.local --count 10 --wait 15m0s
I0623 07:06:37.514776    6327 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 07:06:37.514883    6327 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-gce-stable.k8s.local

W0623 07:07:07.950829    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: i/o timeout
W0623 07:07:33.363256    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:07:43.369492    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:07:53.372483    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:08:03.375638    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:08:13.379405    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:08:23.388280    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:08:33.391999    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:08:43.396688    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:08:53.405696    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:09:03.409518    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": dial tcp 35.225.255.125:443: connect: connection refused
W0623 07:09:23.414106    6327 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://35.225.255.125/api/v1/nodes": net/http: TLS handshake timeout
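
(The retry sequence above is the control plane booting behind the new load balancer: i/o timeout while the forwarding rule has no reachable backend, connection refused once the VM answers but kube-apiserver is not yet listening, then a TLS handshake timeout as the server starts. A quick manual probe of the same endpoint, assuming the API IP from the log:)

# Poll the API endpoint the validator is hitting; expect errors to
# progress from timeout to refused to TLS errors as it comes up.
curl -sk --max-time 5 https://35.225.255.125/healthz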
I0623 07:09:35.699582    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1

... skipping 5 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw" has not yet joined cluster

Validation Failed
W0623 07:09:36.328812    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:09:46.972112    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw" has not yet joined cluster

Validation Failed
W0623 07:09:47.626808    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:09:57.965413    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw" has not yet joined cluster

Validation Failed
W0623 07:09:58.606416    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:10:09.129811    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/master-us-central1-a-587c" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-50vm" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw" has not yet joined cluster

Validation Failed
W0623 07:10:09.714598    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:10:20.163814    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 11 lines ...
Pod	kube-system/cloud-controller-manager-8jnwg											system-cluster-critical pod "cloud-controller-manager-8jnwg" is pending
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-786l5											system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-786l5" is pending
Pod	kube-system/coredns-dd657c749-zwb7q												system-cluster-critical pod "coredns-dd657c749-zwb7q" is pending
Pod	kube-system/dns-controller-78bc9bdd66-n6xk8											system-cluster-critical pod "dns-controller-78bc9bdd66-n6xk8" is pending
Pod	kube-system/kops-controller-nsdnd												system-cluster-critical pod "kops-controller-nsdnd" is pending

Validation Failed
W0623 07:10:20.821420    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:10:31.214502    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 8 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-m5w1" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-nk1s" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw	machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-alpha/zones/us-central1-a/instances/nodes-us-central1-a-tdxw" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-5d4dbc7b59-786l5											system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-786l5" is pending
Pod	kube-system/coredns-dd657c749-zwb7q												system-cluster-critical pod "coredns-dd657c749-zwb7q" is pending

Validation Failed
W0623 07:10:31.863544    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:10:42.210889    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 13 lines ...
Pod	kube-system/coredns-dd657c749-zwb7q												system-cluster-critical pod "coredns-dd657c749-zwb7q" is pending
Pod	kube-system/etcd-manager-main-master-us-central1-a-587c										system-cluster-critical pod "etcd-manager-main-master-us-central1-a-587c" is pending
Pod	kube-system/kube-apiserver-master-us-central1-a-587c										system-cluster-critical pod "kube-apiserver-master-us-central1-a-587c" is pending
Pod	kube-system/kube-proxy-master-us-central1-a-587c										system-node-critical pod "kube-proxy-master-us-central1-a-587c" is pending
Pod	kube-system/metadata-proxy-v0.12-kgcqd												system-node-critical pod "metadata-proxy-v0.12-kgcqd" is pending

Validation Failed
W0623 07:10:42.860016    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:10:53.379230    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 16 lines ...
Pod	kube-system/coredns-dd657c749-zwb7q		system-cluster-critical pod "coredns-dd657c749-zwb7q" is pending
Pod	kube-system/metadata-proxy-v0.12-7tn48		system-node-critical pod "metadata-proxy-v0.12-7tn48" is pending
Pod	kube-system/metadata-proxy-v0.12-lxn6n		system-node-critical pod "metadata-proxy-v0.12-lxn6n" is pending
Pod	kube-system/metadata-proxy-v0.12-q7vth		system-node-critical pod "metadata-proxy-v0.12-q7vth" is pending
Pod	kube-system/metadata-proxy-v0.12-z8xdd		system-node-critical pod "metadata-proxy-v0.12-z8xdd" is pending

Validation Failed
W0623 07:10:54.053250    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:11:04.487988    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 7 lines ...
nodes-us-central1-a-tdxw	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-dd657c749-6225l	system-cluster-critical pod "coredns-dd657c749-6225l" is pending

Validation Failed
W0623 07:11:05.111180    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:11:15.664403    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 71 lines ...
nodes-us-central1-a-tdxw	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-central1-a-nk1s	system-node-critical pod "kube-proxy-nodes-us-central1-a-nk1s" is pending

Validation Failed
W0623 07:12:00.655375    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:12:11.121852    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 7 lines ...
nodes-us-central1-a-tdxw	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-central1-a-50vm	system-node-critical pod "kube-proxy-nodes-us-central1-a-50vm" is pending

Validation Failed
W0623 07:12:11.701828    6327 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 07:12:22.058735    6327 gce_cloud.go:295] Scanning zones: [us-central1-c us-central1-a us-central1-f us-central1-b us-central1-d]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-central1-a	Master	e2-standard-2	1	1	us-central1
nodes-us-central1-a	Node	n1-standard-2	4	4	us-central1
... skipping 183 lines ...
===================================
Random Seed: 1655968460 - Will randomize all specs
Will run 7042 specs

Running in parallel across 25 nodes

Jun 23 07:14:35.380: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Jun 23 07:14:35.380: INFO:  > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Jun 23 07:14:35.380: INFO:  > 
Jun 23 07:14:35.380: INFO: Cluster image sources lookup failed: exit status 1
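(The lookup fails because the suite passed an empty instance-group name, which gcloud cannot parse as a resource; the image-source lookup is informational and the run proceeds. For reference, a well-formed invocation looks like this, with group and zone names taken from the log as illustrative values:)

# list-instances needs a real group name and zone; an empty name
# yields "could not parse resource []" as seen above.
gcloud compute instance-groups list-instances master-us-central1-a \
    --zone us-central1-a --format='get(instance)'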

Jun 23 07:14:35.380: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 07:14:35.381: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 23 07:14:35.398: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 23 07:14:35.426: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 23 07:14:35.426: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 1340 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:14:36.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8455" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:36.304: INFO: Only supported for providers [azure] (not gce)
... skipping 106 lines ...
STEP: Destroying namespace "services-8523" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 27 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:14:37.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8774" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:37.775: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 58 lines ...
test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  test/e2e/kubectl/portforward.go:454
    should support forwarding over websockets
    test/e2e/kubectl/portforward.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:47.739: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 46 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:14:35.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811" in namespace "projected-2396" to be "Succeeded or Failed"
Jun 23 07:14:35.791: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Pending", Reason="", readiness=false. Elapsed: 27.608638ms
Jun 23 07:14:37.811: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047849393s
Jun 23 07:14:39.795: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032033999s
Jun 23 07:14:41.800: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036641609s
Jun 23 07:14:43.797: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Running", Reason="", readiness=true. Elapsed: 8.033234993s
Jun 23 07:14:45.801: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Running", Reason="", readiness=false. Elapsed: 10.037317014s
Jun 23 07:14:47.797: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.033577988s
STEP: Saw pod success
Jun 23 07:14:47.797: INFO: Pod "downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811" satisfied condition "Succeeded or Failed"
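(The framework polls the pod's phase roughly every two seconds until it reaches Succeeded or the 5m deadline expires. A rough kubectl approximation of that wait, with the pod and namespace names taken from the log; the framework additionally fails fast on a Failed phase, which this sketch does not:)

# Approximates the framework's "Succeeded or Failed" polling.
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded \
    pod/downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811 \
    -n projected-2396 --timeout=5m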
Jun 23 07:14:47.800: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811 container client-container: <nil>
STEP: delete the pod
Jun 23 07:14:47.843: INFO: Waiting for pod downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811 to disappear
Jun 23 07:14:47.847: INFO: Pod downwardapi-volume-069e9a81-3fec-426e-8c27-f4c8a4cfb811 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.309 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:47.874: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 23 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  test/e2e/storage/testsuites/subpath.go:397
Jun 23 07:14:35.711: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 07:14:35.774: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7978" in namespace "provisioning-7978" to be "Succeeded or Failed"
Jun 23 07:14:35.804: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Pending", Reason="", readiness=false. Elapsed: 29.409115ms
Jun 23 07:14:37.812: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036904265s
Jun 23 07:14:39.810: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034806035s
Jun 23 07:14:41.809: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034351471s
STEP: Saw pod success
Jun 23 07:14:41.809: INFO: Pod "hostpath-symlink-prep-provisioning-7978" satisfied condition "Succeeded or Failed"
Jun 23 07:14:41.809: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7978" in namespace "provisioning-7978"
Jun 23 07:14:41.821: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7978" to be fully deleted
Jun 23 07:14:41.833: INFO: Creating resource for inline volume
Jun 23 07:14:41.833: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Jun 23 07:14:41.834: INFO: Deleting pod "pod-subpath-test-inlinevolume-jzh4" in namespace "provisioning-7978"
Jun 23 07:14:41.862: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7978" in namespace "provisioning-7978" to be "Succeeded or Failed"
Jun 23 07:14:41.894: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Pending", Reason="", readiness=false. Elapsed: 31.907276ms
Jun 23 07:14:43.899: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03719684s
Jun 23 07:14:45.898: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036229884s
Jun 23 07:14:47.904: INFO: Pod "hostpath-symlink-prep-provisioning-7978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042298083s
STEP: Saw pod success
Jun 23 07:14:47.904: INFO: Pod "hostpath-symlink-prep-provisioning-7978" satisfied condition "Succeeded or Failed"
Jun 23 07:14:47.904: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7978" in namespace "provisioning-7978"
Jun 23 07:14:47.916: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7978" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187
Jun 23 07:14:47.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7978" for this suite.
... skipping 33 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:14:48.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6587" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:48.052: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:187

... skipping 64 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  test/e2e/common/node/security_context.go:337
Jun 23 07:14:36.838: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35" in namespace "security-context-test-6858" to be "Succeeded or Failed"
Jun 23 07:14:36.890: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Pending", Reason="", readiness=false. Elapsed: 51.740295ms
Jun 23 07:14:38.895: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05697987s
Jun 23 07:14:40.898: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05957603s
Jun 23 07:14:42.896: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057471425s
Jun 23 07:14:44.895: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05717426s
Jun 23 07:14:46.893: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Running", Reason="", readiness=true. Elapsed: 10.055269233s
Jun 23 07:14:48.895: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.057086903s
Jun 23 07:14:48.895: INFO: Pod "alpine-nnp-nil-87c3b6bb-ea72-4954-9360-5ea60843dd35" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 07:14:48.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6858" for this suite.


... skipping 2 lines ...
test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/node/security_context.go:298
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/node/security_context.go:337
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:49.025: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:187

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  test/e2e/node/security_context.go:171
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 07:14:36.561: INFO: Waiting up to 5m0s for pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c" in namespace "security-context-8638" to be "Succeeded or Failed"
Jun 23 07:14:36.599: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.201227ms
Jun 23 07:14:38.603: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041931126s
Jun 23 07:14:40.603: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041157994s
Jun 23 07:14:42.603: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041485196s
Jun 23 07:14:44.603: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041293968s
Jun 23 07:14:46.603: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Running", Reason="", readiness=true. Elapsed: 10.041379032s
Jun 23 07:14:48.609: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Running", Reason="", readiness=false. Elapsed: 12.047026045s
Jun 23 07:14:50.610: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.048583821s
STEP: Saw pod success
Jun 23 07:14:50.610: INFO: Pod "security-context-68cd647c-de17-4b10-bbd7-53716458507c" satisfied condition "Succeeded or Failed"
Jun 23 07:14:50.615: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod security-context-68cd647c-de17-4b10-bbd7-53716458507c container test-container: <nil>
STEP: delete the pod
Jun 23 07:14:50.651: INFO: Waiting for pod security-context-68cd647c-de17-4b10-bbd7-53716458507c to disappear
Jun 23 07:14:50.654: INFO: Pod security-context-68cd647c-de17-4b10-bbd7-53716458507c no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.952 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  test/e2e/node/security_context.go:171
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:50.699: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-3999809b-f369-427c-853c-c011bc9ae780
STEP: Creating a pod to test consume configMaps
Jun 23 07:14:35.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca" in namespace "configmap-279" to be "Succeeded or Failed"
Jun 23 07:14:35.625: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 15.145231ms
Jun 23 07:14:37.691: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080978961s
Jun 23 07:14:39.632: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021758926s
Jun 23 07:14:41.632: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021631015s
Jun 23 07:14:43.631: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020709652s
Jun 23 07:14:45.645: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034530408s
Jun 23 07:14:47.662: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051435196s
Jun 23 07:14:49.631: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.020912998s
STEP: Saw pod success
Jun 23 07:14:49.631: INFO: Pod "pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca" satisfied condition "Succeeded or Failed"
Jun 23 07:14:49.634: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:14:50.848: INFO: Waiting for pod pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca to disappear
Jun 23 07:14:50.851: INFO: Pod pod-configmaps-9be25185-a99d-4628-aa15-8c80605c6eca no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:15.407 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:50.901: INFO: Only supported for providers [aws] (not gce)
... skipping 105 lines ...
test/e2e/storage/utils/framework.go:23
  ConfigMap
  test/e2e/storage/volumes.go:49
    should be mountable
    test/e2e/storage/volumes.go:50
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":1,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:53.068: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 119 lines ...
• [SLOW TEST:5.585 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:56.521: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 70 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:14:56.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-5165" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:14:56.638: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:14:48.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea" in namespace "projected-3557" to be "Succeeded or Failed"
Jun 23 07:14:48.138: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129982ms
Jun 23 07:14:50.142: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011981782s
Jun 23 07:14:52.142: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012152744s
Jun 23 07:14:54.142: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011735028s
Jun 23 07:14:56.164: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033968954s
Jun 23 07:14:58.143: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013034127s
Jun 23 07:15:00.143: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.013368604s
STEP: Saw pod success
Jun 23 07:15:00.143: INFO: Pod "downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea" satisfied condition "Succeeded or Failed"
Jun 23 07:15:00.146: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea container client-container: <nil>
STEP: delete the pod
Jun 23 07:15:00.161: INFO: Waiting for pod downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea to disappear
Jun 23 07:15:00.167: INFO: Pod downwardapi-volume-8e1049f7-7f39-4d2d-a643-821e118812ea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.101 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:00.189: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 55 lines ...
• [SLOW TEST:13.145 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:01.040: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:187

... skipping 31 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:15:01.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9402" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:01.119: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:187

... skipping 46 lines ...
• [SLOW TEST:26.350 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/storage/projected_secret.go:92
STEP: Creating projection with secret that has name projected-secret-test-31829742-5c43-4d21-8319-a7b90f6dcb33
STEP: Creating a pod to test consume secrets
Jun 23 07:14:56.789: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6" in namespace "projected-7961" to be "Succeeded or Failed"
Jun 23 07:14:56.808: INFO: Pod "pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.181797ms
Jun 23 07:14:58.813: INFO: Pod "pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023822585s
Jun 23 07:15:00.828: INFO: Pod "pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038603276s
Jun 23 07:15:02.814: INFO: Pod "pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024762086s
STEP: Saw pod success
Jun 23 07:15:02.814: INFO: Pod "pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6" satisfied condition "Succeeded or Failed"
Jun 23 07:15:02.820: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:15:02.875: INFO: Waiting for pod pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6 to disappear
Jun 23 07:15:02.883: INFO: Pod pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
... skipping 5 lines ...
• [SLOW TEST:6.189 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/storage/projected_secret.go:92
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:28.112 seconds]
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 25 lines ...
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:15:02.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail when exceeds active deadline
  test/e2e/apps/job.go:293
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
Jun 23 07:15:05.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2642" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":3,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:05.055: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: Destroying namespace "node-problem-detector-8469" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-node] NodeProblemDetector
test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  test/e2e/node/node_problem_detector.go:62

  Only supported for node OS distro [gci ubuntu] (not debian)

  test/e2e/node/node_problem_detector.go:58
------------------------------
... skipping 56 lines ...
Jun 23 07:14:53.823: INFO: PersistentVolumeClaim pvc-p98qp found but phase is Pending instead of Bound.
Jun 23 07:14:55.828: INFO: PersistentVolumeClaim pvc-p98qp found and phase=Bound (10.030864118s)
Jun 23 07:14:55.828: INFO: Waiting up to 3m0s for PersistentVolume local-tk8wd to have phase Bound
Jun 23 07:14:55.831: INFO: PersistentVolume local-tk8wd found and phase=Bound (3.120601ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8dzq
STEP: Creating a pod to test subpath
Jun 23 07:14:55.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8dzq" in namespace "provisioning-2171" to be "Succeeded or Failed"
Jun 23 07:14:55.849: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687616ms
Jun 23 07:14:57.853: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010655566s
Jun 23 07:14:59.854: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011485592s
Jun 23 07:15:01.867: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024293843s
Jun 23 07:15:03.854: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011637128s
Jun 23 07:15:05.855: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.012475621s
STEP: Saw pod success
Jun 23 07:15:05.855: INFO: Pod "pod-subpath-test-preprovisionedpv-8dzq" satisfied condition "Succeeded or Failed"
Jun 23 07:15:05.858: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-subpath-test-preprovisionedpv-8dzq container test-container-subpath-preprovisionedpv-8dzq: <nil>
STEP: delete the pod
Jun 23 07:15:05.875: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8dzq to disappear
Jun 23 07:15:05.879: INFO: Pod pod-subpath-test-preprovisionedpv-8dzq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8dzq
Jun 23 07:15:05.879: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8dzq" in namespace "provisioning-2171"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:06.057: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 58 lines ...
      test/e2e/storage/testsuites/subpath.go:221

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:14:56.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:10.160 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 86 lines ...
• [SLOW TEST:31.637 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:07.252: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
• [SLOW TEST:35.885 seconds]
[sig-network] EndpointSlice
test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:11.462: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 130 lines ...
• [SLOW TEST:16.405 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 28 lines ...
Jun 23 07:14:55.206: INFO: PersistentVolumeClaim pvc-q6fx5 found but phase is Pending instead of Bound.
Jun 23 07:14:57.210: INFO: PersistentVolumeClaim pvc-q6fx5 found and phase=Bound (4.043126358s)
Jun 23 07:14:57.210: INFO: Waiting up to 3m0s for PersistentVolume local-zlqbn to have phase Bound
Jun 23 07:14:57.213: INFO: PersistentVolume local-zlqbn found and phase=Bound (2.926889ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-vk4v
STEP: Creating a pod to test exec-volume-test
Jun 23 07:14:57.225: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vk4v" in namespace "volume-2832" to be "Succeeded or Failed"
Jun 23 07:14:57.229: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.177942ms
Jun 23 07:14:59.233: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007823129s
Jun 23 07:15:01.235: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009650204s
Jun 23 07:15:03.288: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062206002s
Jun 23 07:15:05.240: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Running", Reason="", readiness=true. Elapsed: 8.014725534s
Jun 23 07:15:07.238: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Running", Reason="", readiness=true. Elapsed: 10.012506539s
Jun 23 07:15:09.245: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Running", Reason="", readiness=true. Elapsed: 12.019402502s
Jun 23 07:15:11.233: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Running", Reason="", readiness=true. Elapsed: 14.007923972s
Jun 23 07:15:13.234: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Running", Reason="", readiness=true. Elapsed: 16.008868522s
Jun 23 07:15:15.245: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.018956198s
STEP: Saw pod success
Jun 23 07:15:15.245: INFO: Pod "exec-volume-test-preprovisionedpv-vk4v" satisfied condition "Succeeded or Failed"
Jun 23 07:15:15.248: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod exec-volume-test-preprovisionedpv-vk4v container exec-container-preprovisionedpv-vk4v: <nil>
STEP: delete the pod
Jun 23 07:15:15.269: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vk4v to disappear
Jun 23 07:15:15.284: INFO: Pod exec-volume-test-preprovisionedpv-vk4v no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-vk4v
Jun 23 07:15:15.284: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vk4v" in namespace "volume-2832"
... skipping 28 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:15.740: INFO: Only supported for providers [azure] (not gce)
... skipping 14 lines ...
      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:2079
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:15:04.003: INFO: >>> kubeConfig: /root/.kube/config
... skipping 3 lines ...
[It] should allow exec of files on the volume
  test/e2e/storage/testsuites/volumes.go:198
Jun 23 07:15:04.051: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 07:15:04.051: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-vljl
STEP: Creating a pod to test exec-volume-test
Jun 23 07:15:04.081: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-vljl" in namespace "volume-9326" to be "Succeeded or Failed"
Jun 23 07:15:04.085: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.626843ms
Jun 23 07:15:06.096: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014440941s
Jun 23 07:15:08.091: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009479297s
Jun 23 07:15:10.090: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008357888s
Jun 23 07:15:12.090: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008665406s
Jun 23 07:15:14.090: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008233428s
Jun 23 07:15:16.090: INFO: Pod "exec-volume-test-inlinevolume-vljl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.00869226s
STEP: Saw pod success
Jun 23 07:15:16.090: INFO: Pod "exec-volume-test-inlinevolume-vljl" satisfied condition "Succeeded or Failed"
Jun 23 07:15:16.093: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod exec-volume-test-inlinevolume-vljl container exec-container-inlinevolume-vljl: <nil>
STEP: delete the pod
Jun 23 07:15:16.143: INFO: Waiting for pod exec-volume-test-inlinevolume-vljl to disappear
Jun 23 07:15:16.161: INFO: Pod exec-volume-test-inlinevolume-vljl no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-vljl
Jun 23 07:15:16.161: INFO: Deleting pod "exec-volume-test-inlinevolume-vljl" in namespace "volume-9326"
... skipping 10 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:16.207: INFO: Only supported for providers [azure] (not gce)
... skipping 52 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:382
Jun 23 07:15:06.164: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 07:15:06.170: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-996z
STEP: Creating a pod to test subpath
Jun 23 07:15:06.183: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-996z" in namespace "provisioning-2389" to be "Succeeded or Failed"
Jun 23 07:15:06.193: INFO: Pod "pod-subpath-test-inlinevolume-996z": Phase="Pending", Reason="", readiness=false. Elapsed: 9.591002ms
Jun 23 07:15:08.199: INFO: Pod "pod-subpath-test-inlinevolume-996z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015821885s
Jun 23 07:15:10.200: INFO: Pod "pod-subpath-test-inlinevolume-996z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016304645s
Jun 23 07:15:12.198: INFO: Pod "pod-subpath-test-inlinevolume-996z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014834817s
Jun 23 07:15:14.199: INFO: Pod "pod-subpath-test-inlinevolume-996z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015126396s
Jun 23 07:15:16.198: INFO: Pod "pod-subpath-test-inlinevolume-996z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014885606s
STEP: Saw pod success
Jun 23 07:15:16.198: INFO: Pod "pod-subpath-test-inlinevolume-996z" satisfied condition "Succeeded or Failed"
Jun 23 07:15:16.214: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-inlinevolume-996z container test-container-subpath-inlinevolume-996z: <nil>
STEP: delete the pod
Jun 23 07:15:16.232: INFO: Waiting for pod pod-subpath-test-inlinevolume-996z to disappear
Jun 23 07:15:16.238: INFO: Pod pod-subpath-test-inlinevolume-996z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-996z
Jun 23 07:15:16.238: INFO: Deleting pod "pod-subpath-test-inlinevolume-996z" in namespace "provisioning-2389"
... skipping 51 lines ...
Jun 23 07:14:54.600: INFO: PersistentVolumeClaim pvc-25ltd found but phase is Pending instead of Bound.
Jun 23 07:14:56.604: INFO: PersistentVolumeClaim pvc-25ltd found and phase=Bound (4.013435361s)
Jun 23 07:14:56.604: INFO: Waiting up to 3m0s for PersistentVolume local-7426s to have phase Bound
Jun 23 07:14:56.607: INFO: PersistentVolume local-7426s found and phase=Bound (2.682149ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vg8w
STEP: Creating a pod to test subpath
Jun 23 07:14:56.621: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vg8w" in namespace "provisioning-4133" to be "Succeeded or Failed"
Jun 23 07:14:56.627: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 5.996971ms
Jun 23 07:14:58.636: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014347503s
Jun 23 07:15:00.673: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051171477s
Jun 23 07:15:02.655: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033571889s
Jun 23 07:15:04.633: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011926577s
Jun 23 07:15:06.648: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026783174s
... skipping 3 lines ...
Jun 23 07:15:14.639: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 18.017739935s
Jun 23 07:15:16.647: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 20.025868748s
Jun 23 07:15:18.639: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 22.017424899s
Jun 23 07:15:20.633: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Pending", Reason="", readiness=false. Elapsed: 24.011152111s
Jun 23 07:15:22.636: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.0142646s
STEP: Saw pod success
Jun 23 07:15:22.636: INFO: Pod "pod-subpath-test-preprovisionedpv-vg8w" satisfied condition "Succeeded or Failed"
Jun 23 07:15:22.639: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-subpath-test-preprovisionedpv-vg8w container test-container-volume-preprovisionedpv-vg8w: <nil>
STEP: delete the pod
Jun 23 07:15:22.658: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vg8w to disappear
Jun 23 07:15:22.662: INFO: Pod pod-subpath-test-preprovisionedpv-vg8w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vg8w
Jun 23 07:15:22.662: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vg8w" in namespace "provisioning-4133"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:22.897: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 164 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/storage/host_path.go:39
[It] should support subPath [NodeConformance]
  test/e2e/common/storage/host_path.go:95
STEP: Creating a pod to test hostPath subPath
Jun 23 07:15:11.551: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3036" to be "Succeeded or Failed"
Jun 23 07:15:11.558: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.772062ms
Jun 23 07:15:13.562: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011521721s
Jun 23 07:15:15.574: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022812072s
Jun 23 07:15:17.563: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012228813s
Jun 23 07:15:19.562: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011464726s
Jun 23 07:15:21.564: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013264857s
Jun 23 07:15:23.565: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 12.013902329s
Jun 23 07:15:25.569: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.018214822s
STEP: Saw pod success
Jun 23 07:15:25.569: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 23 07:15:25.573: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 23 07:15:25.604: INFO: Waiting for pod pod-host-path-test to disappear
Jun 23 07:15:25.610: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.109 seconds]
[sig-storage] HostPath
test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  test/e2e/common/storage/host_path.go:95
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":2,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:25.641: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 60 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:2079
------------------------------
... skipping 159 lines ...
Jun 23 07:14:54.863: INFO: PersistentVolumeClaim pvc-rv8cq found but phase is Pending instead of Bound.
Jun 23 07:14:56.873: INFO: PersistentVolumeClaim pvc-rv8cq found and phase=Bound (10.038624986s)
Jun 23 07:14:56.873: INFO: Waiting up to 3m0s for PersistentVolume local-47dlf to have phase Bound
Jun 23 07:14:56.876: INFO: PersistentVolume local-47dlf found and phase=Bound (3.356476ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pb5d
STEP: Creating a pod to test subpath
Jun 23 07:14:56.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pb5d" in namespace "provisioning-7756" to be "Succeeded or Failed"
Jun 23 07:14:56.897: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098558ms
Jun 23 07:14:58.902: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009005317s
Jun 23 07:15:00.901: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007559683s
Jun 23 07:15:02.907: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014153094s
Jun 23 07:15:04.903: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009467293s
Jun 23 07:15:06.929: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035958093s
Jun 23 07:15:08.904: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.010832752s
Jun 23 07:15:10.901: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.008309718s
STEP: Saw pod success
Jun 23 07:15:10.901: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d" satisfied condition "Succeeded or Failed"
Jun 23 07:15:10.906: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-subpath-test-preprovisionedpv-pb5d container test-container-subpath-preprovisionedpv-pb5d: <nil>
STEP: delete the pod
Jun 23 07:15:10.924: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pb5d to disappear
Jun 23 07:15:10.927: INFO: Pod pod-subpath-test-preprovisionedpv-pb5d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pb5d
Jun 23 07:15:10.927: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pb5d" in namespace "provisioning-7756"
STEP: Creating pod pod-subpath-test-preprovisionedpv-pb5d
STEP: Creating a pod to test subpath
Jun 23 07:15:10.939: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pb5d" in namespace "provisioning-7756" to be "Succeeded or Failed"
Jun 23 07:15:10.942: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.10275ms
Jun 23 07:15:12.948: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00892858s
Jun 23 07:15:14.946: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007112042s
Jun 23 07:15:16.951: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012399765s
Jun 23 07:15:18.949: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010061362s
Jun 23 07:15:20.950: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011012898s
Jun 23 07:15:22.947: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.008334575s
Jun 23 07:15:24.947: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.007678307s
Jun 23 07:15:26.946: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.006978166s
STEP: Saw pod success
Jun 23 07:15:26.946: INFO: Pod "pod-subpath-test-preprovisionedpv-pb5d" satisfied condition "Succeeded or Failed"
Jun 23 07:15:26.949: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-subpath-test-preprovisionedpv-pb5d container test-container-subpath-preprovisionedpv-pb5d: <nil>
STEP: delete the pod
Jun 23 07:15:26.969: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pb5d to disappear
Jun 23 07:15:26.974: INFO: Pod pod-subpath-test-preprovisionedpv-pb5d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pb5d
Jun 23 07:15:26.974: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pb5d" in namespace "provisioning-7756"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-map-c198744f-3e86-4cd0-8cc8-24868ea61b9b
STEP: Creating a pod to test consume secrets
Jun 23 07:15:16.324: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe" in namespace "projected-9011" to be "Succeeded or Failed"
Jun 23 07:15:16.328: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.403474ms
Jun 23 07:15:18.332: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008169331s
Jun 23 07:15:20.334: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009329s
Jun 23 07:15:22.333: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008508093s
Jun 23 07:15:24.333: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008900753s
Jun 23 07:15:26.332: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.007364586s
Jun 23 07:15:28.332: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.007329322s
STEP: Saw pod success
Jun 23 07:15:28.332: INFO: Pod "pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe" satisfied condition "Succeeded or Failed"
Jun 23 07:15:28.335: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:15:28.359: INFO: Waiting for pod pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe to disappear
Jun 23 07:15:28.369: INFO: Pod pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.136 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:28.426: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 96 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:29.103: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 184 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:322
    should scale a replication controller  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:32.648: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 23 07:15:25.456: INFO: Waiting up to 5m0s for pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8" in namespace "emptydir-292" to be "Succeeded or Failed"
Jun 23 07:15:25.466: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.567214ms
Jun 23 07:15:27.471: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014724366s
Jun 23 07:15:29.471: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014550835s
Jun 23 07:15:31.470: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013387249s
Jun 23 07:15:33.472: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0150251s
Jun 23 07:15:35.471: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014476969s
STEP: Saw pod success
Jun 23 07:15:35.471: INFO: Pod "pod-0c3ab368-8a60-445e-852c-cc516c7967a8" satisfied condition "Succeeded or Failed"
Jun 23 07:15:35.474: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-0c3ab368-8a60-445e-852c-cc516c7967a8 container test-container: <nil>
STEP: delete the pod
Jun 23 07:15:35.491: INFO: Waiting for pod pod-0c3ab368-8a60-445e-852c-cc516c7967a8 to disappear
Jun 23 07:15:35.496: INFO: Pod pod-0c3ab368-8a60-445e-852c-cc516c7967a8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.095 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:15:35.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1493" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":3,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 98 lines ...
• [SLOW TEST:60.659 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  test/e2e/apps/deployment.go:135
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:36.316: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override command
Jun 23 07:15:25.864: INFO: Waiting up to 5m0s for pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985" in namespace "containers-4203" to be "Succeeded or Failed"
Jun 23 07:15:25.868: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 3.353943ms
Jun 23 07:15:27.943: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078646708s
Jun 23 07:15:29.872: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007561853s
Jun 23 07:15:31.874: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009802473s
Jun 23 07:15:33.873: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009012803s
Jun 23 07:15:35.875: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010703363s
Jun 23 07:15:37.873: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Pending", Reason="", readiness=false. Elapsed: 12.008594083s
Jun 23 07:15:39.874: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.009625757s
STEP: Saw pod success
Jun 23 07:15:39.874: INFO: Pod "client-containers-3013da87-b924-47a7-aeb6-0309697e8985" satisfied condition "Succeeded or Failed"
Jun 23 07:15:39.882: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod client-containers-3013da87-b924-47a7-aeb6-0309697e8985 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:15:39.909: INFO: Waiting for pod client-containers-3013da87-b924-47a7-aeb6-0309697e8985 to disappear
Jun 23 07:15:39.912: INFO: Pod client-containers-3013da87-b924-47a7-aeb6-0309697e8985 no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.093 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
  should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:15:40.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1280" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:41.038: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 183 lines ...
      test/e2e/storage/testsuites/subpath.go:221

      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1722
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:41.081: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 99 lines ...
STEP: Destroying namespace "services-7753" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:41.192: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
• [SLOW TEST:12.114 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  test/e2e/apps/deployment.go:138
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":4,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:41.251: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 270 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:15:41.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2301" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should accurately determine present and missing resources","total":-1,"completed":5,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:41.491: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 51 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 68 lines ...
Jun 23 07:15:31.150: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 07:15:43.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3545" for this suite.
STEP: Destroying namespace "webhook-3545-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:27.671 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:43.439: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 251 lines ...
test/e2e/node/framework.go:23
  Clean up pods on node
  test/e2e/node/kubelet.go:281
    kubelet should be able to delete 10 pods per node in 1m0s.
    test/e2e/node/kubelet.go:343
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":5,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 07:15:41.646: INFO: Waiting up to 5m0s for pod "pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc" in namespace "emptydir-8432" to be "Succeeded or Failed"
Jun 23 07:15:41.665: INFO: Pod "pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.02838ms
Jun 23 07:15:43.673: INFO: Pod "pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026491061s
Jun 23 07:15:45.669: INFO: Pod "pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022340106s
Jun 23 07:15:47.669: INFO: Pod "pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022624698s
STEP: Saw pod success
Jun 23 07:15:47.669: INFO: Pod "pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc" satisfied condition "Succeeded or Failed"
Jun 23 07:15:47.674: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc container test-container: <nil>
STEP: delete the pod
Jun 23 07:15:47.695: INFO: Waiting for pod pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc to disappear
Jun 23 07:15:47.702: INFO: Pod pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":6,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:47.730: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 58 lines ...
      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1722
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:15:42.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:6.222 seconds]
[sig-storage] EmptyDir wrapper volumes
test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:48.324: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 84 lines ...
• [SLOW TEST:56.224 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should delete pods when suspended
  test/e2e/apps/job.go:141
------------------------------
{"msg":"PASSED [sig-apps] Job should delete pods when suspended","total":-1,"completed":2,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 115 lines ...
test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  test/e2e/storage/csi_mock_volume.go:332
    should require VolumeAttach for drivers with attachment
    test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:50.570: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 59 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:15:51.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6742" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 07:15:49.378: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2" in namespace "security-context-test-3435" to be "Succeeded or Failed"
Jun 23 07:15:49.387: INFO: Pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.809894ms
Jun 23 07:15:51.397: INFO: Pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0187518s
Jun 23 07:15:53.395: INFO: Pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016474344s
Jun 23 07:15:55.402: INFO: Pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023471599s
Jun 23 07:15:55.402: INFO: Pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2" satisfied condition "Succeeded or Failed"
Jun 23 07:15:55.415: INFO: Got logs for pod "busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 07:15:55.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3435" for this suite.

... skipping 3 lines ...
test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  test/e2e/common/node/security_context.go:234
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:55.449: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-c375009a-a7c1-4619-805a-62bf11d86a31
STEP: Creating a pod to test consume secrets
Jun 23 07:15:41.556: INFO: Waiting up to 5m0s for pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55" in namespace "secrets-9284" to be "Succeeded or Failed"
Jun 23 07:15:41.580: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 24.150522ms
Jun 23 07:15:43.590: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034144748s
Jun 23 07:15:45.585: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029120251s
Jun 23 07:15:47.588: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03172214s
Jun 23 07:15:49.587: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031153145s
Jun 23 07:15:51.585: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029015739s
Jun 23 07:15:53.586: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029440332s
Jun 23 07:15:55.593: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.036979174s
STEP: Saw pod success
Jun 23 07:15:55.593: INFO: Pod "pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55" satisfied condition "Succeeded or Failed"
Jun 23 07:15:55.605: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:15:55.642: INFO: Waiting for pod pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55 to disappear
Jun 23 07:15:55.647: INFO: Pod pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.149 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:55.667: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 86 lines ...
Jun 23 07:15:21.821: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 18.01011529s
Jun 23 07:15:23.823: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 20.011986125s
Jun 23 07:15:23.823: INFO: Pod "test-pod" satisfied condition "running"
STEP: Creating statefulset with conflicting port in namespace statefulset-3707
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3707
Jun 23 07:15:23.850: INFO: Observed stateful pod in namespace: statefulset-3707, name: ss-0, uid: 14b67fd4-483f-48a8-9d36-0dfa3286c8b4, status phase: Pending. Waiting for statefulset controller to delete.
Jun 23 07:15:29.547: INFO: Observed stateful pod in namespace: statefulset-3707, name: ss-0, uid: 14b67fd4-483f-48a8-9d36-0dfa3286c8b4, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 07:15:29.558: INFO: Observed stateful pod in namespace: statefulset-3707, name: ss-0, uid: 14b67fd4-483f-48a8-9d36-0dfa3286c8b4, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 07:15:29.563: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3707
STEP: Removing pod with conflicting port in namespace statefulset-3707
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3707 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:122
Jun 23 07:15:45.682: INFO: Deleting all statefulset in ns statefulset-3707
... skipping 11 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    Should recreate evicted statefulset [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
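The StatefulSet test above waits for the controller to delete the Failed pod ss-0 and recreate it. A sketch of observing those delete/recreate events with a field-selector watch, using the namespace and pod name from the log (a simplification of what the e2e framework does):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Watch only the pod named ss-0 in the test namespace.
	w, err := cs.CoreV1().Pods("statefulset-3707").Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=ss-0"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("observed event: %s\n", ev.Type)
		if ev.Type == watch.Deleted {
			// The statefulset controller recreates ss-0 after this point.
			break
		}
	}
}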
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:15:55.825: INFO: Only supported for providers [vsphere] (not gce)
... skipping 27 lines ...
[It] should support existing single file [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:221
Jun 23 07:15:47.835: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 07:15:47.840: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2hmx
STEP: Creating a pod to test subpath
Jun 23 07:15:47.851: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2hmx" in namespace "provisioning-358" to be "Succeeded or Failed"
Jun 23 07:15:47.859: INFO: Pod "pod-subpath-test-inlinevolume-2hmx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061406ms
Jun 23 07:15:49.864: INFO: Pod "pod-subpath-test-inlinevolume-2hmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012869014s
Jun 23 07:15:51.869: INFO: Pod "pod-subpath-test-inlinevolume-2hmx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017393344s
Jun 23 07:15:53.865: INFO: Pod "pod-subpath-test-inlinevolume-2hmx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013386259s
Jun 23 07:15:55.873: INFO: Pod "pod-subpath-test-inlinevolume-2hmx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021579956s
Jun 23 07:15:57.865: INFO: Pod "pod-subpath-test-inlinevolume-2hmx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013798916s
STEP: Saw pod success
Jun 23 07:15:57.865: INFO: Pod "pod-subpath-test-inlinevolume-2hmx" satisfied condition "Succeeded or Failed"
Jun 23 07:15:57.869: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-inlinevolume-2hmx container test-container-subpath-inlinevolume-2hmx: <nil>
STEP: delete the pod
Jun 23 07:15:57.889: INFO: Waiting for pod pod-subpath-test-inlinevolume-2hmx to disappear
Jun 23 07:15:57.892: INFO: Pod pod-subpath-test-inlinevolume-2hmx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2hmx
Jun 23 07:15:57.892: INFO: Deleting pod "pod-subpath-test-inlinevolume-2hmx" in namespace "provisioning-358"
... skipping 133 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:00.259: INFO: Only supported for providers [azure] (not gce)
... skipping 155 lines ...
  test/e2e/common/node/runtime.go:43
    when running a container with a new image
    test/e2e/common/node/runtime.go:259
      should not be able to pull image from invalid registry [NodeConformance]
      test/e2e/common/node/runtime.go:370
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":5,"skipped":102,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 37 lines ...
[It] should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:647
STEP: Waiting for log generator to start.
Jun 23 07:15:43.980: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jun 23 07:15:43.980: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7890" to be "running and ready, or succeeded"
Jun 23 07:15:43.995: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.922013ms
Jun 23 07:15:43.995: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:15:46.000: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019806001s
Jun 23 07:15:46.000: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:15:47.999: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019289988s
Jun 23 07:15:47.999: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:15:50.002: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02172952s
Jun 23 07:15:50.002: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:15:51.999: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019216495s
Jun 23 07:15:51.999: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:15:54.002: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 10.021660859s
Jun 23 07:15:54.002: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jun 23 07:15:54.002: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jun 23 07:15:54.002: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-7890 logs logs-generator logs-generator'
Jun 23 07:15:54.086: INFO: stderr: ""
... skipping 36 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl logs
  test/e2e/kubectl/kubectl.go:1558
    should be able to retrieve and filter logs  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":6,"skipped":17,"failed":0}

S
------------------------------
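The log-retrieval step above shells out to kubectl logs; the same fetch through client-go's GetLogs request, with the pod and container names taken from the test:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Equivalent of: kubectl --namespace=kubectl-7890 logs logs-generator logs-generator
	raw, err := cs.CoreV1().Pods("kubectl-7890").
		GetLogs("logs-generator", &corev1.PodLogOptions{Container: "logs-generator"}).
		Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}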
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:01.507: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 172 lines ...
• [SLOW TEST:62.333 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  test/e2e/network/conntrack.go:132
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:02.539: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 11 lines ...
      Driver local doesn't support InlineVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:15:23.365: INFO: >>> kubeConfig: /root/.kube/config
... skipping 35 lines ...
Jun 23 07:15:40.100: INFO: PersistentVolumeClaim pvc-zrzgz found but phase is Pending instead of Bound.
Jun 23 07:15:42.108: INFO: PersistentVolumeClaim pvc-zrzgz found and phase=Bound (14.072918943s)
Jun 23 07:15:42.108: INFO: Waiting up to 3m0s for PersistentVolume local-m67tw to have phase Bound
Jun 23 07:15:42.115: INFO: PersistentVolume local-m67tw found and phase=Bound (7.521994ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mhnr
STEP: Creating a pod to test subpath
Jun 23 07:15:42.133: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mhnr" in namespace "provisioning-6819" to be "Succeeded or Failed"
Jun 23 07:15:42.144: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.066246ms
Jun 23 07:15:44.148: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01458829s
Jun 23 07:15:46.155: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022214309s
Jun 23 07:15:48.151: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018387713s
Jun 23 07:15:50.152: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019172291s
Jun 23 07:15:52.148: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015231984s
Jun 23 07:15:54.148: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.015333188s
STEP: Saw pod success
Jun 23 07:15:54.148: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr" satisfied condition "Succeeded or Failed"
Jun 23 07:15:54.155: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-preprovisionedpv-mhnr container test-container-subpath-preprovisionedpv-mhnr: <nil>
STEP: delete the pod
Jun 23 07:15:54.179: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mhnr to disappear
Jun 23 07:15:54.185: INFO: Pod pod-subpath-test-preprovisionedpv-mhnr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mhnr
Jun 23 07:15:54.185: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mhnr" in namespace "provisioning-6819"
STEP: Creating pod pod-subpath-test-preprovisionedpv-mhnr
STEP: Creating a pod to test subpath
Jun 23 07:15:54.207: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mhnr" in namespace "provisioning-6819" to be "Succeeded or Failed"
Jun 23 07:15:54.212: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.593089ms
Jun 23 07:15:56.216: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009218721s
Jun 23 07:15:58.215: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008469439s
Jun 23 07:16:00.216: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009349402s
Jun 23 07:16:02.217: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.010417769s
STEP: Saw pod success
Jun 23 07:16:02.217: INFO: Pod "pod-subpath-test-preprovisionedpv-mhnr" satisfied condition "Succeeded or Failed"
Jun 23 07:16:02.221: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-preprovisionedpv-mhnr container test-container-subpath-preprovisionedpv-mhnr: <nil>
STEP: delete the pod
Jun 23 07:16:02.243: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mhnr to disappear
Jun 23 07:16:02.248: INFO: Pod pod-subpath-test-preprovisionedpv-mhnr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mhnr
Jun 23 07:16:02.248: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mhnr" in namespace "provisioning-6819"
... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_configmap.go:112
STEP: Creating configMap with name projected-configmap-test-volume-map-baf6aa17-f32a-4ec0-be29-83d2a9c1bfbd
STEP: Creating a pod to test consume configMaps
Jun 23 07:15:55.902: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5" in namespace "projected-5888" to be "Succeeded or Failed"
Jun 23 07:15:55.907: INFO: Pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.12566ms
Jun 23 07:15:57.911: INFO: Pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008740012s
Jun 23 07:15:59.913: INFO: Pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01058557s
Jun 23 07:16:01.912: INFO: Pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009674311s
Jun 23 07:16:03.912: INFO: Pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.009270095s
STEP: Saw pod success
Jun 23 07:16:03.912: INFO: Pod "pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5" satisfied condition "Succeeded or Failed"
Jun 23 07:16:03.916: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:16:03.944: INFO: Waiting for pod pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5 to disappear
Jun 23 07:16:03.959: INFO: Pod pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.150 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_configmap.go:112
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:03.995: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/framework/framework.go:187

... skipping 69 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-downwardapi-m4vj
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 07:15:36.363: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m4vj" in namespace "subpath-8563" to be "Succeeded or Failed"
Jun 23 07:15:36.371: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Pending", Reason="", readiness=false. Elapsed: 7.231171ms
Jun 23 07:15:38.376: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012708419s
Jun 23 07:15:40.376: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012522865s
Jun 23 07:15:42.375: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011195296s
Jun 23 07:15:44.376: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Running", Reason="", readiness=true. Elapsed: 8.012768029s
Jun 23 07:15:46.385: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Running", Reason="", readiness=true. Elapsed: 10.021330394s
... skipping 4 lines ...
Jun 23 07:15:56.375: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Running", Reason="", readiness=true. Elapsed: 20.011359545s
Jun 23 07:15:58.375: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Running", Reason="", readiness=true. Elapsed: 22.011586134s
Jun 23 07:16:00.375: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Running", Reason="", readiness=true. Elapsed: 24.012012065s
Jun 23 07:16:02.378: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Running", Reason="", readiness=true. Elapsed: 26.014607263s
Jun 23 07:16:04.381: INFO: Pod "pod-subpath-test-downwardapi-m4vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.017073192s
STEP: Saw pod success
Jun 23 07:16:04.381: INFO: Pod "pod-subpath-test-downwardapi-m4vj" satisfied condition "Succeeded or Failed"
Jun 23 07:16:04.384: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-downwardapi-m4vj container test-container-subpath-downwardapi-m4vj: <nil>
STEP: delete the pod
Jun 23 07:16:04.401: INFO: Waiting for pod pod-subpath-test-downwardapi-m4vj to disappear
Jun 23 07:16:04.408: INFO: Pod pod-subpath-test-downwardapi-m4vj no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-m4vj
Jun 23 07:16:04.408: INFO: Deleting pod "pod-subpath-test-downwardapi-m4vj" in namespace "subpath-8563"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with downward pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:04.446: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 66 lines ...
Jun 23 07:15:42.539: INFO: ExecWithOptions: Clientset creation
Jun 23 07:15:42.540: INFO: ExecWithOptions: execute(POST https://35.225.255.125/api/v1/namespaces/sctp-3511/pods/hostexec-nodes-us-central1-a-50vm-rlp96/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 07:15:42.742: INFO: exec nodes-us-central1-a-50vm: command:   lsmod | grep sctp
Jun 23 07:15:42.742: INFO: exec nodes-us-central1-a-50vm: stdout:    ""
Jun 23 07:15:42.742: INFO: exec nodes-us-central1-a-50vm: stderr:    ""
Jun 23 07:15:42.742: INFO: exec nodes-us-central1-a-50vm: exit code: 0
Jun 23 07:15:42.742: INFO: sctp module is not loaded, or an error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 23 07:15:42.742: INFO: the sctp module is not loaded on node: nodes-us-central1-a-50vm
Jun 23 07:15:42.742: INFO: Executing cmd "lsmod | grep sctp" on node nodes-us-central1-a-nk1s
Jun 23 07:15:42.751: INFO: Waiting up to 5m0s for pod "hostexec-nodes-us-central1-a-nk1s-9t5hj" in namespace "sctp-3511" to be "running"
Jun 23 07:15:42.766: INFO: Pod "hostexec-nodes-us-central1-a-nk1s-9t5hj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.583071ms
Jun 23 07:15:44.771: INFO: Pod "hostexec-nodes-us-central1-a-nk1s-9t5hj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019937124s
Jun 23 07:15:46.772: INFO: Pod "hostexec-nodes-us-central1-a-nk1s-9t5hj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020958s
... skipping 4 lines ...
Jun 23 07:15:48.782: INFO: ExecWithOptions: Clientset creation
Jun 23 07:15:48.782: INFO: ExecWithOptions: execute(POST https://35.225.255.125/api/v1/namespaces/sctp-3511/pods/hostexec-nodes-us-central1-a-nk1s-9t5hj/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 07:15:48.906: INFO: exec nodes-us-central1-a-nk1s: command:   lsmod | grep sctp
Jun 23 07:15:48.906: INFO: exec nodes-us-central1-a-nk1s: stdout:    ""
Jun 23 07:15:48.906: INFO: exec nodes-us-central1-a-nk1s: stderr:    ""
Jun 23 07:15:48.906: INFO: exec nodes-us-central1-a-nk1s: exit code: 0
Jun 23 07:15:48.906: INFO: sctp module is not loaded, or an error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 23 07:15:48.906: INFO: the sctp module is not loaded on node: nodes-us-central1-a-nk1s
STEP: Deleting pod hostexec-nodes-us-central1-a-50vm-rlp96 in namespace sctp-3511
STEP: Deleting pod hostexec-nodes-us-central1-a-nk1s-9t5hj in namespace sctp-3511
STEP: creating service sctp-endpoint-test in namespace sctp-3511
Jun 23 07:15:49.049: INFO: Service sctp-endpoint-test in namespace sctp-3511 found.
STEP: validating endpoints do not exist yet
... skipping 55 lines ...
• [SLOW TEST:39.245 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
  should allow creating a basic SCTP service with pod and endpoints
  test/e2e/network/service.go:4070
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints","total":-1,"completed":5,"skipped":36,"failed":0}

SSS
------------------------------
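The "command terminated with exit code 1" in the SCTP module check above is grep's no-match convention: `lsmod | grep sctp` exits 1 when the module is absent, which the framework reports as "sctp module is not loaded". A local sketch of the same check and exit-code handling:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sh", "-c", "lsmod | grep sctp").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("sctp module is loaded:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// grep found nothing: the module is not loaded; the pipeline itself ran fine.
		fmt.Println("sctp module is not loaded")
	default:
		fmt.Println("error running check:", err)
	}
}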
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 39 lines ...
Jun 23 07:15:53.668: INFO: PersistentVolumeClaim pvc-bv24q found but phase is Pending instead of Bound.
Jun 23 07:15:55.675: INFO: PersistentVolumeClaim pvc-bv24q found and phase=Bound (14.064637703s)
Jun 23 07:15:55.675: INFO: Waiting up to 3m0s for PersistentVolume local-jjxzw to have phase Bound
Jun 23 07:15:55.679: INFO: PersistentVolume local-jjxzw found and phase=Bound (4.259905ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2qh9
STEP: Creating a pod to test subpath
Jun 23 07:15:55.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2qh9" in namespace "provisioning-5194" to be "Succeeded or Failed"
Jun 23 07:15:55.705: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.484758ms
Jun 23 07:15:57.709: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012335896s
Jun 23 07:15:59.711: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014395021s
Jun 23 07:16:01.712: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015194239s
Jun 23 07:16:03.711: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01389699s
Jun 23 07:16:05.711: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013863847s
Jun 23 07:16:07.719: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.021558878s
STEP: Saw pod success
Jun 23 07:16:07.719: INFO: Pod "pod-subpath-test-preprovisionedpv-2qh9" satisfied condition "Succeeded or Failed"
Jun 23 07:16:07.728: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-preprovisionedpv-2qh9 container test-container-subpath-preprovisionedpv-2qh9: <nil>
STEP: delete the pod
Jun 23 07:16:07.873: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2qh9 to disappear
Jun 23 07:16:07.882: INFO: Pod pod-subpath-test-preprovisionedpv-2qh9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2qh9
Jun 23 07:16:07.882: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2qh9" in namespace "provisioning-5194"
... skipping 30 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:08.444: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/framework/framework.go:187

... skipping 52 lines ...
Jun 23 07:15:48.417: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 create -f -'
Jun 23 07:15:49.214: INFO: stderr: ""
Jun 23 07:15:49.214: INFO: stdout: "pod/httpd created\n"
Jun 23 07:15:49.214: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 07:15:49.214: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-559" to be "running and ready"
Jun 23 07:15:49.228: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.791025ms
Jun 23 07:15:49.228: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:15:51.246: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031832695s
Jun 23 07:15:51.246: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:15:53.233: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019179607s
Jun 23 07:15:53.234: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:15:55.263: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048405474s
Jun 23 07:15:55.263: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:15:57.232: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.017987301s
Jun 23 07:15:57.232: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-nk1s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC  }]
Jun 23 07:15:59.232: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.017353396s
Jun 23 07:15:59.232: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-nk1s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC  }]
Jun 23 07:16:01.232: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.018196435s
Jun 23 07:16:01.233: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-nk1s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:15:49 +0000 UTC  }]
Jun 23 07:16:03.233: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.019123386s
Jun 23 07:16:03.233: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 07:16:03.234: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should handle in-cluster config
  test/e2e/kubectl/kubectl.go:682
STEP: adding rbac permissions
... skipping 67 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jun 23 07:16:06.006: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jun 23 07:16:06.434: INFO: rc: 1
STEP: trying to use kubectl with invalid token
Jun 23 07:16:06.434: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jun 23 07:16:06.768: INFO: rc: 1
Jun 23 07:16:06.768: INFO: got err error running /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0623 07:16:06.707454     186 merged_client_builder.go:163] Using in-cluster namespace
I0623 07:16:06.707727     186 merged_client_builder.go:121] Using in-cluster configuration
I0623 07:16:06.731941     186 merged_client_builder.go:121] Using in-cluster configuration
I0623 07:16:06.732301     186 round_trippers.go:463] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-559/pods?limit=500
I0623 07:16:06.732312     186 round_trippers.go:469] Request Headers:
... skipping 7 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
error: You must be logged in to the server (Unauthorized)

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 1

error:
exit status 1
STEP: trying to use kubectl with invalid server
Jun 23 07:16:06.768: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jun 23 07:16:07.175: INFO: rc: 1
Jun 23 07:16:07.175: INFO: got err error running /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0623 07:16:07.071943     197 merged_client_builder.go:163] Using in-cluster namespace
I0623 07:16:07.139784     197 round_trippers.go:553] GET http://invalid/api?timeout=32s  in 67 milliseconds
I0623 07:16:07.139897     197 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0623 07:16:07.149788     197 round_trippers.go:553] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0623 07:16:07.149870     197 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0623 07:16:07.149899     197 shortcut.go:100] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0623 07:16:07.156258     197 round_trippers.go:553] GET http://invalid/api?timeout=32s  in 6 milliseconds
I0623 07:16:07.156327     197 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0623 07:16:07.162175     197 round_trippers.go:553] GET http://invalid/api?timeout=32s  in 5 milliseconds
I0623 07:16:07.162253     197 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0623 07:16:07.166335     197 round_trippers.go:553] GET http://invalid/api?timeout=32s  in 3 milliseconds
I0623 07:16:07.166414     197 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0623 07:16:07.166466     197 helpers.go:240] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 1

error:
exit status 1
STEP: trying to use kubectl with invalid namespace
Jun 23 07:16:07.175: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-559 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jun 23 07:16:07.517: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jun 23 07:16:07.517: INFO: stdout: "I0623 07:16:07.475903     207 merged_client_builder.go:121] Using in-cluster configuration\nI0623 07:16:07.492074     207 merged_client_builder.go:121] Using in-cluster configuration\nI0623 07:16:07.504677     207 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 12 milliseconds\nNo resources found in invalid namespace.\n"
Jun 23 07:16:07.517: INFO: stdout: I0623 07:16:07.475903     207 merged_client_builder.go:121] Using in-cluster configuration
... skipping 58 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:407
    should handle in-cluster config
    test/e2e/kubectl/kubectl.go:682
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":6,"skipped":38,"failed":0}

S
------------------------------
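The "Using in-cluster configuration" lines above show kubectl running inside the httpd pod and authenticating via the mounted service-account token. The client-go equivalent of that bootstrap is rest.InClusterConfig; the error branch mirrors the invalid-token 401 exercised above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the mounted service-account token and the KUBERNETES_SERVICE_*
	// env vars (the in-cluster apiserver, e.g. https://100.64.0.1:443 here).
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// With a bad token this fails with 401 Unauthorized, as in the test.
		panic(err)
	}
	fmt.Println("pods:", len(pods.Items))
}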
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:08.711: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  test/e2e/node/security_context.go:71
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jun 23 07:16:00.949: INFO: Waiting up to 5m0s for pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845" in namespace "security-context-219" to be "Succeeded or Failed"
Jun 23 07:16:00.960: INFO: Pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845": Phase="Pending", Reason="", readiness=false. Elapsed: 11.455337ms
Jun 23 07:16:02.966: INFO: Pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016863273s
Jun 23 07:16:04.966: INFO: Pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017475976s
Jun 23 07:16:06.975: INFO: Pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026400423s
Jun 23 07:16:08.965: INFO: Pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016726517s
STEP: Saw pod success
Jun 23 07:16:08.966: INFO: Pod "security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845" satisfied condition "Succeeded or Failed"
Jun 23 07:16:08.968: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845 container test-container: <nil>
STEP: delete the pod
Jun 23 07:16:09.004: INFO: Waiting for pod security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845 to disappear
Jun 23 07:16:09.024: INFO: Pod security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.141 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  test/e2e/node/security_context.go:71
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":6,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:16:09.055: INFO: >>> kubeConfig: /root/.kube/config
... skipping 67 lines ...
• [SLOW TEST:7.091 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:11.162: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 182 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 118 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":16,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 63 lines ...
Jun 23 07:15:26.058: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:15:28.076: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030321843s
Jun 23 07:15:28.076: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:15:30.058: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.012257725s
Jun 23 07:15:30.058: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 07:15:30.058: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 07:15:30.058: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-1443 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.156.36:80 && echo service-down-failed'
Jun 23 07:15:32.431: INFO: rc: 28
Jun 23 07:15:32.431: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.156.36:80 && echo service-down-failed" in pod services-1443/verify-service-down-host-exec-pod: error running /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-1443 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.156.36:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.156.36:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1443
STEP: adding service-proxy-name label
STEP: verifying service is not up
Jun 23 07:15:32.503: INFO: Creating new host exec pod
... skipping 6 lines ...
Jun 23 07:15:36.523: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:15:38.524: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013270379s
Jun 23 07:15:38.524: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:15:40.534: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.022989723s
Jun 23 07:15:40.534: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 07:15:40.534: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 07:15:40.534: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-1443 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.67.43.229:80 && echo service-down-failed'
Jun 23 07:15:42.768: INFO: rc: 28
Jun 23 07:15:42.768: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.67.43.229:80 && echo service-down-failed" in pod services-1443/verify-service-down-host-exec-pod: error running /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-1443 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.67.43.229:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.67.43.229:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1443
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Jun 23 07:15:42.802: INFO: Creating new host exec pod
... skipping 43 lines ...
Jun 23 07:16:05.775: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:16:07.807: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040283231s
Jun 23 07:16:07.807: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:16:09.778: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.010837011s
Jun 23 07:16:09.778: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 07:16:09.778: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 07:16:09.778: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-1443 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.156.36:80 && echo service-down-failed'
Jun 23 07:16:11.935: INFO: rc: 28
Jun 23 07:16:11.935: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.156.36:80 && echo service-down-failed" in pod services-1443/verify-service-down-host-exec-pod: error running /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-1443 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.156.36:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.156.36:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1443
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:187
Jun 23 07:16:11.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:96.411 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  test/e2e/network/service.go:2156
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:11.971: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 86 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 07:16:04.504: INFO: Waiting up to 5m0s for pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b" in namespace "security-context-4272" to be "Succeeded or Failed"
Jun 23 07:16:04.513: INFO: Pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.27681ms
Jun 23 07:16:06.518: INFO: Pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013962322s
Jun 23 07:16:08.521: INFO: Pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01619875s
Jun 23 07:16:10.518: INFO: Pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01396792s
Jun 23 07:16:12.518: INFO: Pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013092699s
STEP: Saw pod success
Jun 23 07:16:12.518: INFO: Pod "security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b" satisfied condition "Succeeded or Failed"
Jun 23 07:16:12.521: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b container test-container: <nil>
STEP: delete the pod
Jun 23 07:16:12.540: INFO: Waiting for pod security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b to disappear
Jun 23 07:16:12.545: INFO: Pod security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.093 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:12.584: INFO: Only supported for providers [vsphere] (not gce)
... skipping 56 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:16:02.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 72 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:15.319: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 95 lines ...
Jun 23 07:15:10.008: INFO: PersistentVolumeClaim csi-hostpath5dwkd found but phase is Pending instead of Bound.
Jun 23 07:15:12.012: INFO: PersistentVolumeClaim csi-hostpath5dwkd found but phase is Pending instead of Bound.
Jun 23 07:15:14.017: INFO: PersistentVolumeClaim csi-hostpath5dwkd found but phase is Pending instead of Bound.
Jun 23 07:15:16.021: INFO: PersistentVolumeClaim csi-hostpath5dwkd found and phase=Bound (38.115690679s)
STEP: Creating pod pod-subpath-test-dynamicpv-hhdv
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 07:15:16.033: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hhdv" in namespace "provisioning-153" to be "Succeeded or Failed"
Jun 23 07:15:16.043: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.360095ms
Jun 23 07:15:18.047: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01418459s
Jun 23 07:15:20.047: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014597279s
Jun 23 07:15:22.059: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026552392s
Jun 23 07:15:24.056: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023641068s
Jun 23 07:15:26.048: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015469755s
... skipping 10 lines ...
Jun 23 07:15:48.096: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Running", Reason="", readiness=true. Elapsed: 32.06348081s
Jun 23 07:15:50.048: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Running", Reason="", readiness=true. Elapsed: 34.014722816s
Jun 23 07:15:52.047: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Running", Reason="", readiness=true. Elapsed: 36.014561631s
Jun 23 07:15:54.048: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Running", Reason="", readiness=true. Elapsed: 38.014897203s
Jun 23 07:15:56.050: INFO: Pod "pod-subpath-test-dynamicpv-hhdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.016980658s
STEP: Saw pod success
Jun 23 07:15:56.050: INFO: Pod "pod-subpath-test-dynamicpv-hhdv" satisfied condition "Succeeded or Failed"
Jun 23 07:15:56.054: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-dynamicpv-hhdv container test-container-subpath-dynamicpv-hhdv: <nil>
STEP: delete the pod
Jun 23 07:15:56.077: INFO: Waiting for pod pod-subpath-test-dynamicpv-hhdv to disappear
Jun 23 07:15:56.082: INFO: Pod pod-subpath-test-dynamicpv-hhdv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hhdv
Jun 23 07:15:56.082: INFO: Deleting pod "pod-subpath-test-dynamicpv-hhdv" in namespace "provisioning-153"
... skipping 61 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:15.599: INFO: Only supported for providers [aws] (not gce)
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:16:08.801: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a" in namespace "projected-4253" to be "Succeeded or Failed"
Jun 23 07:16:08.805: INFO: Pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100695ms
Jun 23 07:16:10.812: INFO: Pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010560317s
Jun 23 07:16:12.825: INFO: Pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02357256s
Jun 23 07:16:14.815: INFO: Pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013448082s
Jun 23 07:16:16.810: INFO: Pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.008357076s
STEP: Saw pod success
Jun 23 07:16:16.810: INFO: Pod "downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a" satisfied condition "Succeeded or Failed"
Jun 23 07:16:16.815: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a container client-container: <nil>
STEP: delete the pod
Jun 23 07:16:16.839: INFO: Waiting for pod downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a to disappear
Jun 23 07:16:16.845: INFO: Pod downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.088 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}

S
------------------------------
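Each `{"msg":"PASSED ..."}` line between blocks is a machine-readable per-spec summary; `total` is -1 in these lines, and the `completed`/`skipped` counters appear to accumulate per parallel worker. A small sketch for decoding them (the struct and function names are ours):

package e2eutil

import "encoding/json"

// specSummary mirrors the JSON result lines interleaved in this log, e.g.
// {"msg":"PASSED ...","total":-1,"completed":7,"skipped":52,"failed":0}.
type specSummary struct {
    Msg       string `json:"msg"`
    Total     int    `json:"total"`
    Completed int    `json:"completed"`
    Skipped   int    `json:"skipped"`
    Failed    int    `json:"failed"`
}

func parseSpecSummary(line string) (specSummary, error) {
    var s specSummary
    err := json.Unmarshal([]byte(line), &s)
    return s, err
}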
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:16.873: INFO: Only supported for providers [azure] (not gce)
... skipping 72 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 07:16:09.168: INFO: Waiting up to 5m0s for pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47" in namespace "emptydir-2080" to be "Succeeded or Failed"
Jun 23 07:16:09.174: INFO: Pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47": Phase="Pending", Reason="", readiness=false. Elapsed: 5.422725ms
Jun 23 07:16:11.177: INFO: Pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008888328s
Jun 23 07:16:13.186: INFO: Pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01767497s
Jun 23 07:16:15.178: INFO: Pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009959091s
Jun 23 07:16:17.190: INFO: Pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021306565s
STEP: Saw pod success
Jun 23 07:16:17.190: INFO: Pod "pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47" satisfied condition "Succeeded or Failed"
Jun 23 07:16:17.204: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47 container test-container: <nil>
STEP: delete the pod
Jun 23 07:16:17.232: INFO: Waiting for pod pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47 to disappear
Jun 23 07:16:17.248: INFO: Pod pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":7,"skipped":110,"failed":0}

SSS
------------------------------
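For context on the "emptydir 0644 on tmpfs" step above: the test pod mounts an emptyDir backed by memory (tmpfs) with an FSGroup set in the pod security context. A hypothetical approximation of such a pod spec (the image, command, and GID are illustrative, not the test's actual values):

package e2eutil

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fsGroupTmpfsPod() *v1.Pod {
    fsGroup := int64(123) // illustrative GID
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-fsgroup"},
        Spec: v1.PodSpec{
            RestartPolicy:   v1.RestartPolicyNever,
            SecurityContext: &v1.PodSecurityContext{FSGroup: &fsGroup},
            Volumes: []v1.Volume{{
                Name: "test-volume",
                VolumeSource: v1.VolumeSource{
                    // Medium=Memory backs the emptyDir with tmpfs.
                    EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                },
            }},
            Containers: []v1.Container{{
                Name:  "test-container",
                Image: "busybox:1.36", // illustrative
                Command: []string{"sh", "-c",
                    "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -ln /mnt/f"},
                VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
            }},
        },
    }
}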
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:17.305: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 137 lines ...
• [SLOW TEST:18.132 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeConformance]
  test/e2e/common/node/pods.go:768
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:18.526: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/downwardapi_volume.go:108
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:16:08.542: INFO: Waiting up to 5m0s for pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5" in namespace "downward-api-1681" to be "Succeeded or Failed"
Jun 23 07:16:08.555: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.955192ms
Jun 23 07:16:10.559: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016567684s
Jun 23 07:16:12.562: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0191595s
Jun 23 07:16:14.576: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033805635s
Jun 23 07:16:16.560: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017185325s
Jun 23 07:16:18.563: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02043145s
STEP: Saw pod success
Jun 23 07:16:18.563: INFO: Pod "metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5" satisfied condition "Succeeded or Failed"
Jun 23 07:16:18.570: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5 container client-container: <nil>
STEP: delete the pod
Jun 23 07:16:18.623: INFO: Waiting for pod metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5 to disappear
Jun 23 07:16:18.634: INFO: Pod metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.201 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/downwardapi_volume.go:108
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSSSS
------------------------------
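The downward API volume tests above project pod metadata into files inside the container. A minimal sketch of such a volume, exposing the pod name via a fieldRef (the path and mode are illustrative):

package e2eutil

import v1 "k8s.io/api/core/v1"

// downwardAPIPodnameVolume projects metadata.name into a file named
// "podname"; mounted into the pod, the container can read its own pod
// name, which is what the test's client-container verifies.
func downwardAPIPodnameVolume() v1.Volume {
    mode := int32(0444) // illustrative defaultMode
    return v1.Volume{
        Name: "podinfo",
        VolumeSource: v1.VolumeSource{
            DownwardAPI: &v1.DownwardAPIVolumeSource{
                DefaultMode: &mode,
                Items: []v1.DownwardAPIVolumeFile{{
                    Path:     "podname",
                    FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
                }},
            },
        },
    }
}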
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:18.718: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 88 lines ...
Jun 23 07:15:09.520: INFO: PersistentVolumeClaim csi-hostpath7znnx found but phase is Pending instead of Bound.
Jun 23 07:15:11.529: INFO: PersistentVolumeClaim csi-hostpath7znnx found but phase is Pending instead of Bound.
Jun 23 07:15:13.534: INFO: PersistentVolumeClaim csi-hostpath7znnx found but phase is Pending instead of Bound.
Jun 23 07:15:15.537: INFO: PersistentVolumeClaim csi-hostpath7znnx found and phase=Bound (14.079768516s)
STEP: Creating pod pod-subpath-test-dynamicpv-p9xd
STEP: Creating a pod to test subpath
Jun 23 07:15:15.557: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-p9xd" in namespace "provisioning-5093" to be "Succeeded or Failed"
Jun 23 07:15:15.572: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.087836ms
Jun 23 07:15:17.576: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019157493s
Jun 23 07:15:19.576: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019130361s
Jun 23 07:15:21.576: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01920946s
Jun 23 07:15:23.581: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024185978s
Jun 23 07:15:25.576: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018906212s
... skipping 4 lines ...
Jun 23 07:15:35.582: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.025279121s
Jun 23 07:15:37.576: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.019455409s
Jun 23 07:15:39.591: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.034367771s
Jun 23 07:15:41.590: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.033301816s
Jun 23 07:15:43.580: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.02375713s
STEP: Saw pod success
Jun 23 07:15:43.581: INFO: Pod "pod-subpath-test-dynamicpv-p9xd" satisfied condition "Succeeded or Failed"
Jun 23 07:15:43.592: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-dynamicpv-p9xd container test-container-subpath-dynamicpv-p9xd: <nil>
STEP: delete the pod
Jun 23 07:15:43.646: INFO: Waiting for pod pod-subpath-test-dynamicpv-p9xd to disappear
Jun 23 07:15:43.651: INFO: Pod pod-subpath-test-dynamicpv-p9xd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-p9xd
Jun 23 07:15:43.651: INFO: Deleting pod "pod-subpath-test-dynamicpv-p9xd" in namespace "provisioning-5093"
STEP: Creating pod pod-subpath-test-dynamicpv-p9xd
STEP: Creating a pod to test subpath
Jun 23 07:15:43.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-p9xd" in namespace "provisioning-5093" to be "Succeeded or Failed"
Jun 23 07:15:43.688: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.297128ms
Jun 23 07:15:45.700: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026677607s
Jun 23 07:15:47.702: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028926409s
Jun 23 07:15:49.693: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020425725s
Jun 23 07:15:51.694: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020575696s
Jun 23 07:15:53.692: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019008095s
Jun 23 07:15:55.696: INFO: Pod "pod-subpath-test-dynamicpv-p9xd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022874157s
STEP: Saw pod success
Jun 23 07:15:55.697: INFO: Pod "pod-subpath-test-dynamicpv-p9xd" satisfied condition "Succeeded or Failed"
Jun 23 07:15:55.705: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-dynamicpv-p9xd container test-container-subpath-dynamicpv-p9xd: <nil>
STEP: delete the pod
Jun 23 07:15:55.771: INFO: Waiting for pod pod-subpath-test-dynamicpv-p9xd to disappear
Jun 23 07:15:55.783: INFO: Pod pod-subpath-test-dynamicpv-p9xd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-p9xd
Jun 23 07:15:55.783: INFO: Deleting pod "pod-subpath-test-dynamicpv-p9xd" in namespace "provisioning-5093"
... skipping 196 lines ...
test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  test/e2e/storage/csi_mock_volume.go:639
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:668
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 35 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":4,"skipped":29,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:16:16.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 07:16:17.072: INFO: Waiting up to 5m0s for pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc" in namespace "emptydir-201" to be "Succeeded or Failed"
Jun 23 07:16:17.101: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.541155ms
Jun 23 07:16:19.107: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034838153s
Jun 23 07:16:21.110: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037139497s
Jun 23 07:16:23.106: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033301411s
Jun 23 07:16:25.107: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034563224s
Jun 23 07:16:27.106: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.033537984s
STEP: Saw pod success
Jun 23 07:16:27.106: INFO: Pod "pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc" satisfied condition "Succeeded or Failed"
Jun 23 07:16:27.109: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc container test-container: <nil>
STEP: delete the pod
Jun 23 07:16:27.127: INFO: Waiting for pod pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc to disappear
Jun 23 07:16:27.130: INFO: Pod pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.149 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":82,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:15:16.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
• [SLOW TEST:72.934 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:29.225: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 32 lines ...
      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:2079
------------------------------
SS
------------------------------
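The many "Only supported for providers [...] (not gce)" and "doesn't support ... -- skipping" lines are guard checks that abort a spec before its body runs; each skipped spec shows up as an "S" in the progress output. A Ginkgo-style sketch of the provider guard (the current-provider argument is assumed to come from test configuration):

package e2eutil

import (
    "fmt"

    "github.com/onsi/ginkgo/v2"
)

// skipUnlessProviderIs skips the current spec unless the active cloud
// provider is in the supported list, producing messages like
// "Only supported for providers [aws] (not gce)" seen in this log.
func skipUnlessProviderIs(current string, supported ...string) {
    for _, p := range supported {
        if p == current {
            return
        }
    }
    ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, current))
}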
{"msg":"PASSED [sig-apps] StatefulSet MinReadySeconds should be honored when enabled","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:16:01.346: INFO: >>> kubeConfig: /root/.kube/config
... skipping 22 lines ...
Jun 23 07:16:09.527: INFO: PersistentVolumeClaim pvc-dmskb found but phase is Pending instead of Bound.
Jun 23 07:16:11.532: INFO: PersistentVolumeClaim pvc-dmskb found and phase=Bound (6.022117615s)
Jun 23 07:16:11.532: INFO: Waiting up to 3m0s for PersistentVolume local-fmlq8 to have phase Bound
Jun 23 07:16:11.536: INFO: PersistentVolume local-fmlq8 found and phase=Bound (3.879518ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rzxb
STEP: Creating a pod to test subpath
Jun 23 07:16:11.549: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rzxb" in namespace "provisioning-4439" to be "Succeeded or Failed"
Jun 23 07:16:11.553: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317607ms
Jun 23 07:16:13.561: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012562225s
Jun 23 07:16:15.559: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010655568s
Jun 23 07:16:17.561: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012187484s
Jun 23 07:16:19.561: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012240395s
Jun 23 07:16:21.559: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01053823s
Jun 23 07:16:23.565: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.016047833s
Jun 23 07:16:25.560: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.011005511s
Jun 23 07:16:27.559: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009876561s
Jun 23 07:16:29.559: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.010350709s
STEP: Saw pod success
Jun 23 07:16:29.559: INFO: Pod "pod-subpath-test-preprovisionedpv-rzxb" satisfied condition "Succeeded or Failed"
Jun 23 07:16:29.564: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-preprovisionedpv-rzxb container test-container-subpath-preprovisionedpv-rzxb: <nil>
STEP: delete the pod
Jun 23 07:16:29.587: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rzxb to disappear
Jun 23 07:16:29.596: INFO: Pod pod-subpath-test-preprovisionedpv-rzxb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rzxb
Jun 23 07:16:29.596: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rzxb" in namespace "provisioning-4439"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
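The "found but phase is Pending instead of Bound" lines above come from an analogous poll on the PersistentVolumeClaim's status. A minimal sketch, again assuming a configured client-go Clientset (function name and message format are ours):

package e2eutil

import (
    "context"
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls every 2s until the claim's status phase is Bound,
// echoing the log's "found but phase is Pending" messages while it waits.
func waitForPVCBound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        if pvc.Status.Phase != v1.ClaimBound {
            fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n",
                name, pvc.Status.Phase)
            return false, nil
        }
        return true, nil
    })
}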
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 98 lines ...
• [SLOW TEST:17.702 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:36.262: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 48 lines ...
      Driver "csi-hostpath" does not define supported mount option - skipping

      test/e2e/storage/testsuites/provisioning.go:189
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:15:57.917: INFO: >>> kubeConfig: /root/.kube/config
... skipping 29 lines ...
Jun 23 07:16:10.358: INFO: PersistentVolumeClaim pvc-dglwf found but phase is Pending instead of Bound.
Jun 23 07:16:12.363: INFO: PersistentVolumeClaim pvc-dglwf found and phase=Bound (10.045830803s)
Jun 23 07:16:12.363: INFO: Waiting up to 3m0s for PersistentVolume local-2qqth to have phase Bound
Jun 23 07:16:12.367: INFO: PersistentVolume local-2qqth found and phase=Bound (4.195517ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gjhj
STEP: Creating a pod to test subpath
Jun 23 07:16:12.401: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gjhj" in namespace "provisioning-100" to be "Succeeded or Failed"
Jun 23 07:16:12.408: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 7.277197ms
Jun 23 07:16:14.429: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028285181s
Jun 23 07:16:16.413: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011900124s
Jun 23 07:16:18.444: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04334716s
Jun 23 07:16:20.441: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039960701s
Jun 23 07:16:22.415: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014529281s
... skipping 2 lines ...
Jun 23 07:16:28.412: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.011428538s
Jun 23 07:16:30.414: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.012799424s
Jun 23 07:16:32.422: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.021466693s
Jun 23 07:16:34.439: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Pending", Reason="", readiness=false. Elapsed: 22.03838266s
Jun 23 07:16:36.415: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.013854589s
STEP: Saw pod success
Jun 23 07:16:36.415: INFO: Pod "pod-subpath-test-preprovisionedpv-gjhj" satisfied condition "Succeeded or Failed"
Jun 23 07:16:36.419: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-subpath-test-preprovisionedpv-gjhj container test-container-subpath-preprovisionedpv-gjhj: <nil>
STEP: delete the pod
Jun 23 07:16:36.444: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gjhj to disappear
Jun 23 07:16:36.453: INFO: Pod pod-subpath-test-preprovisionedpv-gjhj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gjhj
Jun 23 07:16:36.454: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gjhj" in namespace "provisioning-100"
... skipping 30 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:36.884: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 11 lines ...
      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1577
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":6,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:16:19.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
• [SLOW TEST:18.977 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":5,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] version v1
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 358 lines ...
test/e2e/network/common/framework.go:23
  version v1
  test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 30 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-966815d9-3342-4c7b-a793-8826ff3b96ec
STEP: Creating a pod to test consume secrets
Jun 23 07:16:27.207: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469" in namespace "projected-9622" to be "Succeeded or Failed"
Jun 23 07:16:27.211: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323252ms
Jun 23 07:16:29.221: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014518739s
Jun 23 07:16:31.215: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008619051s
Jun 23 07:16:33.217: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010019953s
Jun 23 07:16:35.224: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016937983s
Jun 23 07:16:37.216: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009228869s
Jun 23 07:16:39.217: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009806879s
Jun 23 07:16:41.216: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Pending", Reason="", readiness=false. Elapsed: 14.008697762s
Jun 23 07:16:43.217: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.010233511s
STEP: Saw pod success
Jun 23 07:16:43.217: INFO: Pod "pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469" satisfied condition "Succeeded or Failed"
Jun 23 07:16:43.221: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:16:43.305: INFO: Waiting for pod pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469 to disappear
Jun 23 07:16:43.310: INFO: Pod pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:16.156 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:43.335: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 68 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-6715/configmap-test-3ac05d3d-2df6-41e7-b629-02fdc5fa520a
STEP: Creating a pod to test consume configMaps
Jun 23 07:16:29.391: INFO: Waiting up to 5m0s for pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed" in namespace "configmap-6715" to be "Succeeded or Failed"
Jun 23 07:16:29.411: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 20.5681ms
Jun 23 07:16:31.416: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024748445s
Jun 23 07:16:33.424: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033040362s
Jun 23 07:16:35.421: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030313499s
Jun 23 07:16:37.417: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026590909s
Jun 23 07:16:39.415: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024243605s
Jun 23 07:16:41.416: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025379091s
Jun 23 07:16:43.416: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.024765073s
STEP: Saw pod success
Jun 23 07:16:43.416: INFO: Pod "pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed" satisfied condition "Succeeded or Failed"
Jun 23 07:16:43.422: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed container env-test: <nil>
STEP: delete the pod
Jun 23 07:16:43.462: INFO: Waiting for pod pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed to disappear
Jun 23 07:16:43.469: INFO: Pod pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.252 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] NodeLease
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:43.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-9430" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease should have OwnerReferences set","total":-1,"completed":10,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:43.537: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 64 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:45.014: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/framework/framework.go:187

... skipping 57 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:45.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2531" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":11,"skipped":106,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:45.318: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
• [SLOW TEST:16.094 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should apply changes to a job status [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:45.963: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 276 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      test/e2e/storage/testsuites/provisioning.go:428
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:16:42.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-ad71d76d-9689-4fd5-9bd0-24fa62a6a787
STEP: Creating a pod to test consume configMaps
Jun 23 07:16:42.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e" in namespace "projected-8222" to be "Succeeded or Failed"
Jun 23 07:16:42.653: INFO: Pod "pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.646555ms
Jun 23 07:16:44.698: INFO: Pod "pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066679723s
Jun 23 07:16:46.658: INFO: Pod "pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e": Phase="Running", Reason="", readiness=false. Elapsed: 4.026638667s
Jun 23 07:16:48.660: INFO: Pod "pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028659173s
STEP: Saw pod success
Jun 23 07:16:48.660: INFO: Pod "pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e" satisfied condition "Succeeded or Failed"
Jun 23 07:16:48.668: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:16:48.688: INFO: Waiting for pod pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e to disappear
Jun 23 07:16:48.693: INFO: Pod pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.275 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:48.774: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 99 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:48.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-8106" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":10,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/projected_downwardapi.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:16:45.399: INFO: Waiting up to 5m0s for pod "metadata-volume-a4597fd6-006c-4401-9228-87415c3be231" in namespace "projected-5096" to be "Succeeded or Failed"
Jun 23 07:16:45.404: INFO: Pod "metadata-volume-a4597fd6-006c-4401-9228-87415c3be231": Phase="Pending", Reason="", readiness=false. Elapsed: 5.220743ms
Jun 23 07:16:47.411: INFO: Pod "metadata-volume-a4597fd6-006c-4401-9228-87415c3be231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011542307s
Jun 23 07:16:49.409: INFO: Pod "metadata-volume-a4597fd6-006c-4401-9228-87415c3be231": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009584852s
STEP: Saw pod success
Jun 23 07:16:49.409: INFO: Pod "metadata-volume-a4597fd6-006c-4401-9228-87415c3be231" satisfied condition "Succeeded or Failed"
Jun 23 07:16:49.412: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod metadata-volume-a4597fd6-006c-4401-9228-87415c3be231 container client-container: <nil>
STEP: delete the pod
Jun 23 07:16:49.449: INFO: Waiting for pod metadata-volume-a4597fd6-006c-4401-9228-87415c3be231 to disappear
Jun 23 07:16:49.454: INFO: Pod metadata-volume-a4597fd6-006c-4401-9228-87415c3be231 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 07:16:49.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5096" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":117,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:49.487: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 90 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:49.640: INFO: Only supported for providers [azure] (not gce)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-c30394c6-2f7a-4d00-a23a-d6ccc49077e1
STEP: Creating a pod to test consume configMaps
Jun 23 07:16:43.638: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171" in namespace "configmap-9669" to be "Succeeded or Failed"
Jun 23 07:16:43.652: INFO: Pod "pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171": Phase="Pending", Reason="", readiness=false. Elapsed: 13.572572ms
Jun 23 07:16:45.656: INFO: Pod "pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018225013s
Jun 23 07:16:47.687: INFO: Pod "pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048868713s
Jun 23 07:16:49.657: INFO: Pod "pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018858683s
STEP: Saw pod success
Jun 23 07:16:49.657: INFO: Pod "pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171" satisfied condition "Succeeded or Failed"
Jun 23 07:16:49.661: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171 container configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 07:16:49.682: INFO: Waiting for pod pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171 to disappear
Jun 23 07:16:49.685: INFO: Pod pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 130 lines ...
    Requires at least 2 nodes (not 0)

    test/e2e/framework/network/utils.go:782
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:49.709: INFO: Only supported for providers [vsphere] (not gce)
... skipping 170 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:49.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7578" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:49.901: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
Jun 23 07:16:49.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-os-rejection
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should reject pod when the node OS doesn't match pod's OS
  test/e2e/common/node/pod_admission.go:38
Jun 23 07:16:49.812: INFO: Waiting up to 2m0s for pod "wrong-pod-os" in namespace "pod-os-rejection-7767" to be "failed with reason PodOSNotSupported"
Jun 23 07:16:49.819: INFO: Pod "wrong-pod-os": Phase="Pending", Reason="", readiness=false. Elapsed: 7.126047ms
Jun 23 07:16:51.832: INFO: Pod "wrong-pod-os": Phase="Failed", Reason="PodOSNotSupported", readiness=false. Elapsed: 2.019804508s
Jun 23 07:16:51.832: INFO: Pod "wrong-pod-os" satisfied condition "failed with reason PodOSNotSupported"
[AfterEach] [sig-node] PodOSRejection [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 07:16:51.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-os-rejection-7767" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS","total":-1,"completed":4,"skipped":82,"failed":0}

S
------------------------------
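The PodOSRejection test above creates a pod whose spec.os.name does not match the node's OS; the kubelet fails it with reason PodOSNotSupported, which the test then waits for (the "failed with reason PodOSNotSupported" condition in the log). A sketch of such a pod (the image is illustrative):

package e2eutil

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrongOSPod declares spec.os.name=windows; on a Linux node the kubelet
// sets Phase=Failed with Reason=PodOSNotSupported, as seen in the log.
func wrongOSPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "wrong-pod-os"},
        Spec: v1.PodSpec{
            OS:            &v1.PodOS{Name: v1.Windows},
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:  "wrong-pod-os",
                Image: "busybox:1.36", // illustrative
            }},
        },
    }
}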
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:51.905: INFO: Only supported for providers [azure] (not gce)
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:16:46.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c" in namespace "downward-api-8285" to be "Succeeded or Failed"
Jun 23 07:16:46.051: INFO: Pod "downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411274ms
Jun 23 07:16:48.061: INFO: Pod "downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018031171s
Jun 23 07:16:50.057: INFO: Pod "downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013943866s
Jun 23 07:16:52.056: INFO: Pod "downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013588892s
STEP: Saw pod success
Jun 23 07:16:52.056: INFO: Pod "downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c" satisfied condition "Succeeded or Failed"
Jun 23 07:16:52.060: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c container client-container: <nil>
STEP: delete the pod
Jun 23 07:16:52.088: INFO: Waiting for pod downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c to disappear
Jun 23 07:16:52.095: INFO: Pod downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.102 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/common/node/sysctl.go:37
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 12 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:52.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8021" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":9,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 113 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 17 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:53.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PV Protection
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 25 lines ...
Jun 23 07:16:53.110: INFO: AfterEach: Cleaning up test resources.
Jun 23 07:16:53.110: INFO: pvc is nil
Jun 23 07:16:53.110: INFO: Deleting PersistentVolume "hostpath-b9dx7"

•S
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":7,"skipped":54,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:53.137: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 111 lines ...
  test/e2e/common/node/runtime.go:43
    on terminated container
    test/e2e/common/node/runtime.go:136
      should report termination message if TerminationMessagePath is set [NodeConformance]
      test/e2e/common/node/runtime.go:173
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":7,"skipped":35,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:55.136: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 71 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:55.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9676" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":8,"skipped":46,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-expansion 
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 91 lines ...
test/e2e/storage/utils/framework.go:23
  loopback local block volume
  test/e2e/storage/local_volume_resize.go:45
    should support online expansion on node
    test/e2e/storage/local_volume_resize.go:85
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-expansion  loopback local block volume should support online expansion on node","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:56.177: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 55 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:16:56.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7471" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 95 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:56.904: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 125 lines ...
• [SLOW TEST:11.013 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:58.617: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 136 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Jun 23 07:16:53.237: INFO: Waiting up to 5m0s for pod "pod-c8e51d56-6a06-4908-a176-d5b275703f55" in namespace "emptydir-8666" to be "Succeeded or Failed"
Jun 23 07:16:53.254: INFO: Pod "pod-c8e51d56-6a06-4908-a176-d5b275703f55": Phase="Pending", Reason="", readiness=false. Elapsed: 17.777277ms
Jun 23 07:16:55.268: INFO: Pod "pod-c8e51d56-6a06-4908-a176-d5b275703f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031244788s
Jun 23 07:16:57.270: INFO: Pod "pod-c8e51d56-6a06-4908-a176-d5b275703f55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033074621s
Jun 23 07:16:59.277: INFO: Pod "pod-c8e51d56-6a06-4908-a176-d5b275703f55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040224924s
STEP: Saw pod success
Jun 23 07:16:59.277: INFO: Pod "pod-c8e51d56-6a06-4908-a176-d5b275703f55" satisfied condition "Succeeded or Failed"
Jun 23 07:16:59.317: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-c8e51d56-6a06-4908-a176-d5b275703f55 container test-container: <nil>
STEP: delete the pod
Jun 23 07:16:59.432: INFO: Waiting for pod pod-c8e51d56-6a06-4908-a176-d5b275703f55 to disappear
Jun 23 07:16:59.453: INFO: Pod pod-c8e51d56-6a06-4908-a176-d5b275703f55 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":8,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:59.544: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:187

... skipping 172 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      test/e2e/storage/testsuites/volume_expand.go:252
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:16:59.920: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 54 lines ...
Jun 23 07:16:46.735: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:46.740: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:46.766: INFO: Unable to read jessie_udp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:46.771: INFO: Unable to read jessie_tcp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:46.779: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:46.786: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:46.822: INFO: Lookups using dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234 failed for: [wheezy_udp@dns-test-service.dns-2934.svc.cluster.local wheezy_tcp@dns-test-service.dns-2934.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local jessie_udp@dns-test-service.dns-2934.svc.cluster.local jessie_tcp@dns-test-service.dns-2934.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local]

Jun 23 07:16:51.834: INFO: Unable to read wheezy_udp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:51.844: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:51.886: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:51.904: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:51.960: INFO: Unable to read jessie_udp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:51.984: INFO: Unable to read jessie_tcp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:51.995: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:52.007: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:52.039: INFO: Lookups using dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234 failed for: [wheezy_udp@dns-test-service.dns-2934.svc.cluster.local wheezy_tcp@dns-test-service.dns-2934.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local jessie_udp@dns-test-service.dns-2934.svc.cluster.local jessie_tcp@dns-test-service.dns-2934.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local]

Jun 23 07:16:56.828: INFO: Unable to read wheezy_udp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.835: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.842: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.851: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.902: INFO: Unable to read jessie_udp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.916: INFO: Unable to read jessie_tcp@dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.924: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.933: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local from pod dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234: the server could not find the requested resource (get pods dns-test-ba878833-2828-4c6f-a20d-3aa142f96234)
Jun 23 07:16:56.970: INFO: Lookups using dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234 failed for: [wheezy_udp@dns-test-service.dns-2934.svc.cluster.local wheezy_tcp@dns-test-service.dns-2934.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local jessie_udp@dns-test-service.dns-2934.svc.cluster.local jessie_tcp@dns-test-service.dns-2934.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2934.svc.cluster.local]

Jun 23 07:17:01.934: INFO: DNS probes using dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:41.612 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 138 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":4,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:02.574: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-83184d2f-440b-4fe8-82f8-f4cc8ca279af
STEP: Creating a pod to test consume secrets
Jun 23 07:16:53.273: INFO: Waiting up to 5m0s for pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d" in namespace "secrets-4127" to be "Succeeded or Failed"
Jun 23 07:16:53.282: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.598741ms
Jun 23 07:16:55.303: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029335918s
Jun 23 07:16:57.292: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01852359s
Jun 23 07:16:59.317: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043735235s
Jun 23 07:17:01.287: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013887787s
Jun 23 07:17:03.298: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024963932s
STEP: Saw pod success
Jun 23 07:17:03.298: INFO: Pod "pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d" satisfied condition "Succeeded or Failed"
Jun 23 07:17:03.302: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:17:03.346: INFO: Waiting for pod pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d to disappear
Jun 23 07:17:03.350: INFO: Pod pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
... skipping 5 lines ...
• [SLOW TEST:10.234 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:14.226 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  test/e2e/network/dns.go:92
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":13,"skipped":128,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:03.771: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/framework/framework.go:187

... skipping 189 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4745" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":14,"skipped":150,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:20.271 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should manage the lifecycle of a job [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should manage the lifecycle of a job [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:05.313: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 74 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl validation
  test/e2e/kubectl/kubectl.go:1033
    should create/apply a valid CR for CRD with validation schema
    test/e2e/kubectl/kubectl.go:1052
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":10,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:05.859: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 64 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-fd1b8802-99d0-4e5c-8702-72efcdcf940d
STEP: Creating a pod to test consume configMaps
Jun 23 07:16:56.502: INFO: Waiting up to 5m0s for pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598" in namespace "configmap-2068" to be "Succeeded or Failed"
Jun 23 07:16:56.523: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598": Phase="Pending", Reason="", readiness=false. Elapsed: 20.95765ms
Jun 23 07:16:58.635: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133227522s
Jun 23 07:17:00.541: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039898428s
Jun 23 07:17:02.528: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026660205s
Jun 23 07:17:04.529: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026995032s
Jun 23 07:17:06.534: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.032811646s
STEP: Saw pod success
Jun 23 07:17:06.534: INFO: Pod "pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598" satisfied condition "Succeeded or Failed"
Jun 23 07:17:06.561: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:06.613: INFO: Waiting for pod pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598 to disappear
Jun 23 07:17:06.623: INFO: Pod pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.287 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:06.693: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 159 lines ...
• [SLOW TEST:30.161 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  test/e2e/storage/pvc_protection.go:128
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":9,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:07.071: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 39 lines ...
• [SLOW TEST:52.124 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  test/e2e/apps/job.go:229
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":7,"skipped":12,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:07.494: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 215 lines ...
Jun 23 07:16:11.683: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath52t6s] to have phase Bound
Jun 23 07:16:11.696: INFO: PersistentVolumeClaim csi-hostpath52t6s found but phase is Pending instead of Bound.
Jun 23 07:16:13.704: INFO: PersistentVolumeClaim csi-hostpath52t6s found but phase is Pending instead of Bound.
Jun 23 07:16:15.708: INFO: PersistentVolumeClaim csi-hostpath52t6s found and phase=Bound (4.025365834s)
STEP: Creating pod pod-subpath-test-dynamicpv-p6qb
STEP: Creating a pod to test subpath
Jun 23 07:16:15.724: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-p6qb" in namespace "provisioning-7028" to be "Succeeded or Failed"
Jun 23 07:16:15.731: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.8256ms
Jun 23 07:16:17.735: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010967173s
Jun 23 07:16:19.739: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015052797s
Jun 23 07:16:21.735: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010709018s
Jun 23 07:16:23.756: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032387351s
Jun 23 07:16:25.735: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011334014s
... skipping 3 lines ...
Jun 23 07:16:33.742: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.018030204s
Jun 23 07:16:35.743: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.019293736s
Jun 23 07:16:37.747: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.022726764s
Jun 23 07:16:39.736: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.012246628s
Jun 23 07:16:41.737: INFO: Pod "pod-subpath-test-dynamicpv-p6qb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.013340255s
STEP: Saw pod success
Jun 23 07:16:41.737: INFO: Pod "pod-subpath-test-dynamicpv-p6qb" satisfied condition "Succeeded or Failed"
Jun 23 07:16:41.741: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-dynamicpv-p6qb container test-container-volume-dynamicpv-p6qb: <nil>
STEP: delete the pod
Jun 23 07:16:41.776: INFO: Waiting for pod pod-subpath-test-dynamicpv-p6qb to disappear
Jun 23 07:16:41.791: INFO: Pod pod-subpath-test-dynamicpv-p6qb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-p6qb
Jun 23 07:16:41.791: INFO: Deleting pod "pod-subpath-test-dynamicpv-p6qb" in namespace "provisioning-7028"
... skipping 63 lines ...
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":60,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:07.581: INFO: Only supported for providers [azure] (not gce)
... skipping 40 lines ...
• [SLOW TEST:25.371 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:07.754: INFO: Only supported for providers [aws] (not gce)
... skipping 164 lines ...
test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  test/e2e/apimachinery/generated_clientset.go:105
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":9,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:07.878: INFO: Only supported for providers [azure] (not gce)
... skipping 155 lines ...
STEP: Destroying namespace "services-5148" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":8,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:08.040: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 56 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:08.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6438" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":10,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:08.124: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 71 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:08.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-935" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:08.166: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 79 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:08.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6985" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:08.318: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 150 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:08.363: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 51 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:09.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-433" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":11,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:79.111 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:10.702: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 31 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:10.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8203" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":7,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:11.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5946" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 15 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:11.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1685" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":6,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:11.516: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  test/e2e/common/node/kubelet.go:81
    should have an terminated reason [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:14.337: INFO: Only supported for providers [azure] (not gce)
... skipping 69 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:14.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2715" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-ed4b139b-9155-42d1-8bcc-502a3044ba4e
STEP: Creating a pod to test consume configMaps
Jun 23 07:17:03.462: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd" in namespace "projected-6538" to be "Succeeded or Failed"
Jun 23 07:17:03.503: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.792786ms
Jun 23 07:17:05.507: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044472321s
Jun 23 07:17:07.532: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069385779s
Jun 23 07:17:09.508: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046085903s
Jun 23 07:17:11.511: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048914664s
Jun 23 07:17:13.518: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.055201939s
Jun 23 07:17:15.508: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.04549785s
Jun 23 07:17:17.507: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.044867859s
STEP: Saw pod success
Jun 23 07:17:17.507: INFO: Pod "pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd" satisfied condition "Succeeded or Failed"
Jun 23 07:17:17.510: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:17.537: INFO: Waiting for pod pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd to disappear
Jun 23 07:17:17.544: INFO: Pod pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.152 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:17.572: INFO: Only supported for providers [azure] (not gce)
... skipping 90 lines ...
• [SLOW TEST:6.129 seconds]
[sig-node] Events
test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running 
  test/e2e/node/events.go:41
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running ","total":-1,"completed":7,"skipped":28,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:17.749: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Jun 23 07:17:17.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
  test/e2e/storage/volume_provisioning.go:743
Jun 23 07:17:17.828: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
  test/e2e/framework/framework.go:187
Jun 23 07:17:17.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-825" for this suite.


S [SKIPPING] [0.049 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  test/e2e/storage/volume_provisioning.go:742
    should report an error and create no PV [It]
    test/e2e/storage/volume_provisioning.go:743

    Only supported for providers [aws] (not gce)

    test/e2e/storage/volume_provisioning.go:744
------------------------------
... skipping 65 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] version v1
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:07.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 50 lines ...
test/e2e/network/common/framework.go:23
  version v1
  test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service Proxy [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:19.987: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 187 lines ...
• [SLOW TEST:13.178 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":6,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:21.126: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 162 lines ...
test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  test/e2e/storage/csi_mock_volume.go:750
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:765
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:22.159: INFO: Only supported for providers [azure] (not gce)
... skipping 66 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:22.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7063" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":8,"skipped":52,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 85 lines ...
  test/e2e/kubectl/portforward.go:454
    that expects NO client request
    test/e2e/kubectl/portforward.go:464
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:465
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:24.771: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 240 lines ...
Jun 23 07:15:54.301: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:15:56.301: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007865109s
Jun 23 07:15:56.301: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:15:58.303: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.009395025s
Jun 23 07:15:58.303: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 07:15:58.303: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 07:15:58.303: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-4894 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.108.82:80 && echo service-down-failed'
Jun 23 07:16:00.511: INFO: rc: 28
Jun 23 07:16:00.511: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.108.82:80 && echo service-down-failed" in pod services-4894/verify-service-down-host-exec-pod: error running /logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=services-4894 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.108.82:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.108.82:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4894
STEP: verifying service up-down-2 is still up
Jun 23 07:16:00.522: INFO: Creating new host exec pod
Jun 23 07:16:00.529: INFO: Waiting up to 5m0s for pod "verify-service-up-host-exec-pod" in namespace "services-4894" to be "running and ready"
... skipping 118 lines ...
• [SLOW TEST:169.184 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to up and down services
  test/e2e/network/service.go:1045
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/downwardapi_volume.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:17:07.828: INFO: Waiting up to 5m0s for pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56" in namespace "downward-api-1602" to be "Succeeded or Failed"
Jun 23 07:17:07.863: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 35.746989ms
Jun 23 07:17:09.869: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041768391s
Jun 23 07:17:11.874: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046707328s
Jun 23 07:17:13.872: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04475497s
Jun 23 07:17:15.871: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043569414s
Jun 23 07:17:17.869: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.041026996s
Jun 23 07:17:19.875: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.047560975s
Jun 23 07:17:21.869: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 14.041630397s
Jun 23 07:17:23.869: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Pending", Reason="", readiness=false. Elapsed: 16.041026071s
Jun 23 07:17:25.870: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.042358839s
STEP: Saw pod success
Jun 23 07:17:25.870: INFO: Pod "metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56" satisfied condition "Succeeded or Failed"
Jun 23 07:17:25.874: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56 container client-container: <nil>
STEP: delete the pod
Jun 23 07:17:25.896: INFO: Waiting for pod metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56 to disappear
Jun 23 07:17:25.904: INFO: Pod metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:18.345 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/downwardapi_volume.go:93
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":70,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:26.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7270" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":7,"skipped":76,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:26.234: INFO: Only supported for providers [vsphere] (not gce)
... skipping 62 lines ...
test/e2e/common/node/framework.go:23
  NodeLease
  test/e2e/common/node/node_lease.go:51
    the kubelet should report node status infrequently
    test/e2e/common/node/node_lease.go:114
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should report node status infrequently","total":-1,"completed":11,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:26.255: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 79 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:26.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "protocol-5473" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":8,"skipped":99,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 111 lines ...
test/e2e/storage/utils/framework.go:23
  storage capacity
  test/e2e/storage/csi_mock_volume.go:1100
    unlimited
    test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":3,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:26.471: INFO: Only supported for providers [vsphere] (not gce)
... skipping 139 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1439
------------------------------
... skipping 4 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test service account token: 
Jun 23 07:17:08.448: INFO: Waiting up to 5m0s for pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b" in namespace "svcaccounts-4890" to be "Succeeded or Failed"
Jun 23 07:17:08.456: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032543ms
Jun 23 07:17:10.465: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017713303s
Jun 23 07:17:12.460: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012534673s
Jun 23 07:17:14.475: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026761186s
Jun 23 07:17:16.460: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012332581s
Jun 23 07:17:18.461: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0132961s
Jun 23 07:17:20.471: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022868206s
Jun 23 07:17:22.462: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.014136804s
Jun 23 07:17:24.461: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.013347614s
Jun 23 07:17:26.466: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.01773408s
Jun 23 07:17:28.461: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.013078504s
STEP: Saw pod success
Jun 23 07:17:28.461: INFO: Pod "test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b" satisfied condition "Succeeded or Failed"
Jun 23 07:17:28.465: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:28.485: INFO: Waiting for pod test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b to disappear
Jun 23 07:17:28.488: INFO: Pod test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:20.112 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":8,"skipped":61,"failed":0}

S
------------------------------
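The repeated Phase="Pending" ... Elapsed: ... lines in the ServiceAccounts test above come from polling the pod until it reaches a terminal phase, the "Succeeded or Failed" condition named in the first Waiting line. A minimal sketch of that polling pattern with client-go follows; the helper name and the 2s interval are assumptions, not the e2e framework's actual helper:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSucceededOrFailed polls the pod every 2s until it is
    // Succeeded (done) or Failed (error), or the timeout expires.
    func waitForPodSucceededOrFailed(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil // condition "Succeeded or Failed" satisfied
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            }
            return false, nil // still Pending/Running; keep polling
        })
    }

------------------------------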
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:28.523: INFO: Driver local doesn't support ext3 -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 07:17:24.908: INFO: Waiting up to 5m0s for pod "downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93" in namespace "downward-api-813" to be "Succeeded or Failed"
Jun 23 07:17:24.921: INFO: Pod "downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93": Phase="Pending", Reason="", readiness=false. Elapsed: 13.273297ms
Jun 23 07:17:26.926: INFO: Pod "downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018256315s
Jun 23 07:17:28.927: INFO: Pod "downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019382128s
Jun 23 07:17:30.936: INFO: Pod "downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027962618s
STEP: Saw pod success
Jun 23 07:17:30.936: INFO: Pod "downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93" satisfied condition "Succeeded or Failed"
Jun 23 07:17:30.943: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93 container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:17:31.011: INFO: Waiting for pod downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93 to disappear
Jun 23 07:17:31.020: INFO: Pod downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.185 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":78,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:20.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
• [SLOW TEST:10.303 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:31.168: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:187

... skipping 57 lines ...
      Only supported for providers [vsphere] (not gce)

      test/e2e/storage/drivers/in_tree.go:1439
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":10,"skipped":48,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:24.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:7.645 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":11,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:20.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 07:17:20.056: INFO: Waiting up to 5m0s for pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa" in namespace "downward-api-4818" to be "Succeeded or Failed"
Jun 23 07:17:20.061: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218131ms
Jun 23 07:17:22.065: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008214747s
Jun 23 07:17:24.066: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010010638s
Jun 23 07:17:26.068: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011811879s
Jun 23 07:17:28.065: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008646695s
Jun 23 07:17:30.068: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011235318s
Jun 23 07:17:32.083: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026736491s
Jun 23 07:17:34.082: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.025559848s
STEP: Saw pod success
Jun 23 07:17:34.082: INFO: Pod "downward-api-c485ab9f-e604-45d7-a335-04f07be328aa" satisfied condition "Succeeded or Failed"
Jun 23 07:17:34.094: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod downward-api-c485ab9f-e604-45d7-a335-04f07be328aa container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:17:34.183: INFO: Waiting for pod downward-api-c485ab9f-e604-45d7-a335-04f07be328aa to disappear
Jun 23 07:17:34.218: INFO: Pod downward-api-c485ab9f-e604-45d7-a335-04f07be328aa no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.230 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:34.265: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-3868/configmap-test-8d32f51d-bb49-4b0f-9091-91c96574d569
STEP: Creating a pod to test consume configMaps
Jun 23 07:17:26.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098" in namespace "configmap-3868" to be "Succeeded or Failed"
Jun 23 07:17:26.523: INFO: Pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098": Phase="Pending", Reason="", readiness=false. Elapsed: 17.143973ms
Jun 23 07:17:28.527: INFO: Pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021960013s
Jun 23 07:17:30.531: INFO: Pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025981422s
Jun 23 07:17:32.526: INFO: Pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021056353s
Jun 23 07:17:34.532: INFO: Pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.026769475s
STEP: Saw pod success
Jun 23 07:17:34.532: INFO: Pod "pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098" satisfied condition "Succeeded or Failed"
Jun 23 07:17:34.538: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098 container env-test: <nil>
STEP: delete the pod
Jun 23 07:17:34.567: INFO: Waiting for pod pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098 to disappear
Jun 23 07:17:34.582: INFO: Pod pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.173 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":102,"failed":0}

SS
------------------------------
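The ConfigMap environment test above creates a ConfigMap and a pod whose container env is populated from one of its keys via configMapKeyRef. A minimal sketch of the two objects; names, key, and image are illustrative assumptions:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapEnv: a ConfigMap plus a container that reads one of its
    // keys into an environment variable.
    func configMapEnv() (*corev1.ConfigMap, corev1.Container) {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"}, // assumed name
            Data:       map[string]string{"data-1": "value-1"},
        }
        c := corev1.Container{
            Name:    "env-test",
            Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
            Command: []string{"sh", "-c", "env"},
            Env: []corev1.EnvVar{{
                Name: "CONFIG_DATA_1",
                ValueFrom: &corev1.EnvVarSource{
                    ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        Key:                  "data-1",
                    },
                },
            }},
        }
        return cm, c
    }

------------------------------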
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:34.644: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 162 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 62 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute poststart http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":151,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:36.172: INFO: Only supported for providers [aws] (not gce)
... skipping 286 lines ...
Jun 23 07:17:24.053: INFO: PersistentVolumeClaim pvc-qsjz4 found but phase is Pending instead of Bound.
Jun 23 07:17:26.064: INFO: PersistentVolumeClaim pvc-qsjz4 found and phase=Bound (12.047782137s)
Jun 23 07:17:26.064: INFO: Waiting up to 3m0s for PersistentVolume local-2729w to have phase Bound
Jun 23 07:17:26.077: INFO: PersistentVolume local-2729w found and phase=Bound (12.455718ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c6kp
STEP: Creating a pod to test subpath
Jun 23 07:17:26.090: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c6kp" in namespace "provisioning-7447" to be "Succeeded or Failed"
Jun 23 07:17:26.103: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.380813ms
Jun 23 07:17:28.108: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017554012s
Jun 23 07:17:30.108: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017363024s
Jun 23 07:17:32.114: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02392257s
Jun 23 07:17:34.110: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01961442s
Jun 23 07:17:36.107: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016516663s
STEP: Saw pod success
Jun 23 07:17:36.107: INFO: Pod "pod-subpath-test-preprovisionedpv-c6kp" satisfied condition "Succeeded or Failed"
Jun 23 07:17:36.111: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-preprovisionedpv-c6kp container test-container-subpath-preprovisionedpv-c6kp: <nil>
STEP: delete the pod
Jun 23 07:17:36.136: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c6kp to disappear
Jun 23 07:17:36.142: INFO: Pod pod-subpath-test-preprovisionedpv-c6kp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c6kp
Jun 23 07:17:36.142: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c6kp" in namespace "provisioning-7447"
... skipping 34 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "apply-4260" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":8,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:37.029: INFO: Only supported for providers [aws] (not gce)
... skipping 52 lines ...
Jun 23 07:17:25.468: INFO: PersistentVolumeClaim pvc-kr8zw found but phase is Pending instead of Bound.
Jun 23 07:17:27.487: INFO: PersistentVolumeClaim pvc-kr8zw found and phase=Bound (12.063313345s)
Jun 23 07:17:27.487: INFO: Waiting up to 3m0s for PersistentVolume local-fvkdh to have phase Bound
Jun 23 07:17:27.493: INFO: PersistentVolume local-fvkdh found and phase=Bound (5.874201ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ktqb
STEP: Creating a pod to test subpath
Jun 23 07:17:27.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ktqb" in namespace "provisioning-599" to be "Succeeded or Failed"
Jun 23 07:17:27.513: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.790028ms
Jun 23 07:17:29.519: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012420723s
Jun 23 07:17:31.531: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024677816s
Jun 23 07:17:33.531: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024580523s
Jun 23 07:17:35.520: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012921308s
Jun 23 07:17:37.520: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013842157s
Jun 23 07:17:39.520: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.01373109s
STEP: Saw pod success
Jun 23 07:17:39.520: INFO: Pod "pod-subpath-test-preprovisionedpv-ktqb" satisfied condition "Succeeded or Failed"
Jun 23 07:17:39.527: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-preprovisionedpv-ktqb container test-container-volume-preprovisionedpv-ktqb: <nil>
STEP: delete the pod
Jun 23 07:17:39.559: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ktqb to disappear
Jun 23 07:17:39.564: INFO: Pod pod-subpath-test-preprovisionedpv-ktqb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ktqb
Jun 23 07:17:39.564: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ktqb" in namespace "provisioning-599"
... skipping 21 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:39.889: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 119 lines ...
• [SLOW TEST:19.229 seconds]
[sig-node] PreStop
test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:40.507: INFO: Only supported for providers [openstack] (not gce)
... skipping 33 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:17:40.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2092" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":13,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Lease
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:26.337 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  test/e2e/common/node/container_probe.go:278
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:40.957: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override arguments
Jun 23 07:17:32.376: INFO: Waiting up to 5m0s for pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe" in namespace "containers-6707" to be "Succeeded or Failed"
Jun 23 07:17:32.387: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.23147ms
Jun 23 07:17:34.396: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019844413s
Jun 23 07:17:36.404: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027426903s
Jun 23 07:17:38.395: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018735348s
Jun 23 07:17:40.408: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03155207s
Jun 23 07:17:42.408: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031929609s
Jun 23 07:17:44.390: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.014199245s
STEP: Saw pod success
Jun 23 07:17:44.391: INFO: Pod "client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe" satisfied condition "Succeeded or Failed"
Jun 23 07:17:44.396: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:44.428: INFO: Waiting for pod client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe to disappear
Jun 23 07:17:44.431: INFO: Pod client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
... skipping 28 lines ...
• [SLOW TEST:13.240 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":51,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:44.473: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 220 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      test/e2e/storage/testsuites/ephemeral.go:315
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:49.100: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 125 lines ...
• [SLOW TEST:92.463 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a GRPC liveness probe [NodeConformance]
  test/e2e/common/node/container_probe.go:543
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]","total":-1,"completed":8,"skipped":133,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:49.924: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
STEP: Creating pod
Jun 23 07:17:13.954: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-cj6m7" in namespace "csi-mock-volumes-874" to be "running"
Jun 23 07:17:13.963: INFO: Pod "pvc-volume-tester-cj6m7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512814ms
Jun 23 07:17:15.971: INFO: Pod "pvc-volume-tester-cj6m7": Phase="Running", Reason="", readiness=true. Elapsed: 2.016519846s
Jun 23 07:17:15.971: INFO: Pod "pvc-volume-tester-cj6m7" satisfied condition "running"
STEP: checking for CSIInlineVolumes feature
Jun 23 07:17:16.006: INFO: Error getting logs for pod inline-volume-g8kv8: the server rejected our request for an unknown reason (get pods inline-volume-g8kv8)
Jun 23 07:17:16.011: INFO: Deleting pod "inline-volume-g8kv8" in namespace "csi-mock-volumes-874"
Jun 23 07:17:16.019: INFO: Wait up to 5m0s for pod "inline-volume-g8kv8" to be fully deleted
STEP: Deleting the previously created pod
Jun 23 07:17:24.048: INFO: Deleting pod "pvc-volume-tester-cj6m7" in namespace "csi-mock-volumes-874"
Jun 23 07:17:24.057: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cj6m7" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 07:17:34.108: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Jun 23 07:17:34.109: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-cj6m7
Jun 23 07:17:34.109: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-874
Jun 23 07:17:34.109: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: eff7ea4e-6def-4c15-9c38-0cf5d46627d5
Jun 23 07:17:34.109: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jun 23 07:17:34.109: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-183ecccad4443cc959734922d3157557f8ea0dd2e3c10fd2658426c973880cdd","target_path":"/var/lib/kubelet/pods/eff7ea4e-6def-4c15-9c38-0cf5d46627d5/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-cj6m7
Jun 23 07:17:34.109: INFO: Deleting pod "pvc-volume-tester-cj6m7" in namespace "csi-mock-volumes-874"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-874
STEP: Waiting for namespaces [csi-mock-volumes-874] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:467
    contain ephemeral=true when using inline volume
    test/e2e/storage/csi_mock_volume.go:517
------------------------------
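The "Found volume attribute csi.storage.k8s.io/..." lines in the CSI mock volume test above verify that, for an inline (ephemeral) CSI volume declared directly in the pod spec, the kubelet passes pod information to the driver at publish time: ephemeral=true, pod.name, pod.namespace, pod.uid, and serviceAccount.name. A minimal sketch of how such an inline volume is declared; the driver name is a placeholder for the suite's mock driver:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // inlineCSIVolume declares a CSI volume inline in the pod spec (no PVC),
    // like the "my-volume" seen in the NodeUnpublishVolume request above.
    func inlineCSIVolume(driverName string) corev1.Volume {
        return corev1.Volume{
            Name: "my-volume",
            VolumeSource: corev1.VolumeSource{
                CSI: &corev1.CSIVolumeSource{
                    Driver: driverName, // placeholder; the suite deploys a mock driver
                },
            },
        }
    }

------------------------------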
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:40.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 71 lines ...
• [SLOW TEST:12.315 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  should validate Deployment Status endpoints [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":9,"skipped":91,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:53.193: INFO: Only supported for providers [vsphere] (not gce)
... skipping 24 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/auth/service_accounts.go:333
STEP: Creating a pod to test service account token: 
Jun 23 07:17:22.240: INFO: Waiting up to 5m0s for pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" in namespace "svcaccounts-3882" to be "Succeeded or Failed"
Jun 23 07:17:22.244: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288519ms
Jun 23 07:17:24.249: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008773946s
Jun 23 07:17:26.249: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009442391s
Jun 23 07:17:28.252: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011972425s
Jun 23 07:17:30.251: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.011302834s
STEP: Saw pod success
Jun 23 07:17:30.251: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" satisfied condition "Succeeded or Failed"
Jun 23 07:17:30.255: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:30.277: INFO: Waiting for pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e to disappear
Jun 23 07:17:30.284: INFO: Pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e no longer exists
STEP: Creating a pod to test service account token: 
Jun 23 07:17:30.291: INFO: Waiting up to 5m0s for pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" in namespace "svcaccounts-3882" to be "Succeeded or Failed"
Jun 23 07:17:30.296: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.394938ms
Jun 23 07:17:32.315: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024087356s
Jun 23 07:17:34.304: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013493477s
Jun 23 07:17:36.302: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011582731s
Jun 23 07:17:38.305: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01477351s
STEP: Saw pod success
Jun 23 07:17:38.306: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" satisfied condition "Succeeded or Failed"
Jun 23 07:17:38.312: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:38.329: INFO: Waiting for pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e to disappear
Jun 23 07:17:38.333: INFO: Pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e no longer exists
STEP: Creating a pod to test service account token: 
Jun 23 07:17:38.341: INFO: Waiting up to 5m0s for pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" in namespace "svcaccounts-3882" to be "Succeeded or Failed"
Jun 23 07:17:38.349: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.984276ms
Jun 23 07:17:40.357: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01583777s
Jun 23 07:17:42.363: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022183791s
Jun 23 07:17:44.356: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015446943s
Jun 23 07:17:46.354: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012562504s
Jun 23 07:17:48.354: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013409752s
STEP: Saw pod success
Jun 23 07:17:48.355: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" satisfied condition "Succeeded or Failed"
Jun 23 07:17:48.358: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:48.378: INFO: Waiting for pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e to disappear
Jun 23 07:17:48.383: INFO: Pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e no longer exists
STEP: Creating a pod to test service account token: 
Jun 23 07:17:48.388: INFO: Waiting up to 5m0s for pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" in namespace "svcaccounts-3882" to be "Succeeded or Failed"
Jun 23 07:17:48.401: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.949136ms
Jun 23 07:17:50.405: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01697364s
Jun 23 07:17:52.405: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016687277s
Jun 23 07:17:54.414: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026230993s
STEP: Saw pod success
Jun 23 07:17:54.414: INFO: Pod "test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" satisfied condition "Succeeded or Failed"
Jun 23 07:17:54.418: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:17:54.452: INFO: Waiting for pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e to disappear
Jun 23 07:17:54.456: INFO: Pod test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:32.282 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/auth/service_accounts.go:333
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:60.410 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  test/e2e/common/node/container_probe.go:227
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":9,"skipped":50,"failed":0}

SSSSS
------------------------------
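The exec liveness probe test above relies on the probe command running longer than timeoutSeconds, so the kubelet marks the probe failed and restarts the container; the [MinimumKubeletVersion:1.20] tag reflects that exec probe timeouts are only enforced on 1.20+ kubelets (the ExecProbeTimeout feature gate). A minimal sketch of such a probe; the command and numbers are illustrative, not the test's exact values:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // execLivenessProbe: the exec action sleeps past TimeoutSeconds, so each
    // probe attempt times out and the container gets restarted.
    func execLivenessProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 10"}},
            },
            InitialDelaySeconds: 15,
            TimeoutSeconds:      1,
            FailureThreshold:    1,
        }
    }

------------------------------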
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:49.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 07:17:50.053: INFO: Waiting up to 5m0s for pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635" in namespace "emptydir-2956" to be "Succeeded or Failed"
Jun 23 07:17:50.057: INFO: Pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635": Phase="Pending", Reason="", readiness=false. Elapsed: 3.884019ms
Jun 23 07:17:52.061: INFO: Pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007981573s
Jun 23 07:17:54.061: INFO: Pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007916228s
Jun 23 07:17:56.061: INFO: Pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007797348s
Jun 23 07:17:58.062: INFO: Pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.008190184s
STEP: Saw pod success
Jun 23 07:17:58.062: INFO: Pod "pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635" satisfied condition "Succeeded or Failed"
Jun 23 07:17:58.066: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635 container test-container: <nil>
STEP: delete the pod
Jun 23 07:17:58.086: INFO: Waiting for pod pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635 to disappear
Jun 23 07:17:58.090: INFO: Pod pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.137 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":145,"failed":0}

SSSSSSS
------------------------------
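The "(root,0777,tmpfs)" EmptyDir test above mounts an emptyDir whose medium is Memory, so the files land on tmpfs rather than node disk, and then checks the 0777 mode of a file written as root. A minimal sketch of the volume declaration; the volume name is an assumption:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // tmpfsEmptyDir: Medium=Memory backs the emptyDir with tmpfs.
    func tmpfsEmptyDir() corev1.Volume {
        return corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{
                    Medium: corev1.StorageMediumMemory,
                },
            },
        }
    }

------------------------------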
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 52 lines ...
  test/e2e/kubectl/portforward.go:454
    that expects a client request
    test/e2e/kubectl/portforward.go:455
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:459
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":16,"skipped":194,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:17:58.618: INFO: Only supported for providers [azure] (not gce)
... skipping 52 lines ...
[It] should support non-existent path
  test/e2e/storage/testsuites/subpath.go:196
Jun 23 07:17:54.521: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 07:17:54.521: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-46jr
STEP: Creating a pod to test subpath
Jun 23 07:17:54.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-46jr" in namespace "provisioning-2574" to be "Succeeded or Failed"
Jun 23 07:17:54.542: INFO: Pod "pod-subpath-test-inlinevolume-46jr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199677ms
Jun 23 07:17:56.549: INFO: Pod "pod-subpath-test-inlinevolume-46jr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015720557s
Jun 23 07:17:58.548: INFO: Pod "pod-subpath-test-inlinevolume-46jr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01449728s
Jun 23 07:18:00.548: INFO: Pod "pod-subpath-test-inlinevolume-46jr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014780692s
STEP: Saw pod success
Jun 23 07:18:00.548: INFO: Pod "pod-subpath-test-inlinevolume-46jr" satisfied condition "Succeeded or Failed"
Jun 23 07:18:00.553: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-subpath-test-inlinevolume-46jr container test-container-volume-inlinevolume-46jr: <nil>
STEP: delete the pod
Jun 23 07:18:00.577: INFO: Waiting for pod pod-subpath-test-inlinevolume-46jr to disappear
Jun 23 07:18:00.580: INFO: Pod pod-subpath-test-inlinevolume-46jr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-46jr
Jun 23 07:18:00.580: INFO: Deleting pod "pod-subpath-test-inlinevolume-46jr" in namespace "provisioning-2574"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":24,"failed":0}

SSSSS
------------------------------
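The subPath suites above (readOnly directory, existing directory, non-existent path) all hinge on the subPath field of a volumeMount, which exposes a single sub-directory of the volume to the container; for writable volumes such as emptyDir the kubelet creates a missing subPath directory, which is what the passing "non-existent path" case exercises. A minimal sketch; the names are placeholders for the suite's generated ones:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // subPathMount mounts only the "subdir" sub-directory of test-volume.
    func subPathMount() corev1.VolumeMount {
        return corev1.VolumeMount{
            Name:      "test-volume",
            MountPath: "/test-volume",
            SubPath:   "subdir",
        }
    }

------------------------------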
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 209 lines ...
• [SLOW TEST:69.555 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
  should create a ClusterIP Service with SCTP ports
  test/e2e/network/service.go:4178
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports","total":-1,"completed":5,"skipped":84,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:01.501: INFO: Only supported for providers [azure] (not gce)
... skipping 37 lines ...
      Driver local doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":9,"skipped":68,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:44.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
  test/e2e/common/node/runtime.go:43
    when running a container with a new image
    test/e2e/common/node/runtime.go:259
      should be able to pull image [NodeConformance]
      test/e2e/common/node/runtime.go:375
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":10,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:01.920: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 196 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:18:02.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4926" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":11,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:02.242: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
[It] should support existing single file [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:221
Jun 23 07:17:44.565: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 07:17:44.565: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xsxw
STEP: Creating a pod to test subpath
Jun 23 07:17:44.576: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xsxw" in namespace "provisioning-6834" to be "Succeeded or Failed"
Jun 23 07:17:44.581: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 5.388537ms
Jun 23 07:17:46.588: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01238016s
Jun 23 07:17:48.588: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011855278s
Jun 23 07:17:50.585: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008883691s
Jun 23 07:17:52.586: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010390546s
Jun 23 07:17:54.588: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.012195688s
Jun 23 07:17:56.589: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.013347955s
Jun 23 07:17:58.586: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.010405197s
Jun 23 07:18:00.587: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.010725484s
Jun 23 07:18:02.598: INFO: Pod "pod-subpath-test-inlinevolume-xsxw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.021789033s
STEP: Saw pod success
Jun 23 07:18:02.598: INFO: Pod "pod-subpath-test-inlinevolume-xsxw" satisfied condition "Succeeded or Failed"
Jun 23 07:18:02.617: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-inlinevolume-xsxw container test-container-subpath-inlinevolume-xsxw: <nil>
STEP: delete the pod
Jun 23 07:18:02.712: INFO: Waiting for pod pod-subpath-test-inlinevolume-xsxw to disappear
Jun 23 07:18:02.744: INFO: Pod pod-subpath-test-inlinevolume-xsxw no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xsxw
Jun 23 07:18:02.745: INFO: Deleting pod "pod-subpath-test-inlinevolume-xsxw" in namespace "provisioning-6834"
... skipping 50 lines ...
• [SLOW TEST:8.116 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:04.262: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 11 lines ...
[It] should support existing directory
  test/e2e/storage/testsuites/subpath.go:207
Jun 23 07:17:58.221: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 07:17:58.225: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kvxh
STEP: Creating a pod to test subpath
Jun 23 07:17:58.240: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kvxh" in namespace "provisioning-1394" to be "Succeeded or Failed"
Jun 23 07:17:58.248: INFO: Pod "pod-subpath-test-inlinevolume-kvxh": Phase="Pending", Reason="", readiness=false. Elapsed: 7.227932ms
Jun 23 07:18:00.253: INFO: Pod "pod-subpath-test-inlinevolume-kvxh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012823445s
Jun 23 07:18:02.253: INFO: Pod "pod-subpath-test-inlinevolume-kvxh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012150234s
Jun 23 07:18:04.253: INFO: Pod "pod-subpath-test-inlinevolume-kvxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012238212s
STEP: Saw pod success
Jun 23 07:18:04.253: INFO: Pod "pod-subpath-test-inlinevolume-kvxh" satisfied condition "Succeeded or Failed"
Jun 23 07:18:04.257: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-subpath-test-inlinevolume-kvxh container test-container-volume-inlinevolume-kvxh: <nil>
STEP: delete the pod
Jun 23 07:18:04.280: INFO: Waiting for pod pod-subpath-test-inlinevolume-kvxh to disappear
Jun 23 07:18:04.283: INFO: Pod pod-subpath-test-inlinevolume-kvxh no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kvxh
Jun 23 07:18:04.283: INFO: Deleting pod "pod-subpath-test-inlinevolume-kvxh" in namespace "provisioning-1394"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":152,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:04.341: INFO: Only supported for providers [openstack] (not gce)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-910a2d2e-0459-476d-8b22-c75ebc2380a7
STEP: Creating a pod to test consume secrets
Jun 23 07:18:01.597: INFO: Waiting up to 5m0s for pod "pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a" in namespace "secrets-5531" to be "Succeeded or Failed"
Jun 23 07:18:01.601: INFO: Pod "pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243075ms
Jun 23 07:18:03.610: INFO: Pod "pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01205699s
Jun 23 07:18:05.610: INFO: Pod "pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012761112s
STEP: Saw pod success
Jun 23 07:18:05.610: INFO: Pod "pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a" satisfied condition "Succeeded or Failed"
Jun 23 07:18:05.628: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:18:05.653: INFO: Waiting for pod pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a to disappear
Jun 23 07:18:05.659: INFO: Pod pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:18:05.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5531" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:05.695: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 11 lines ...
      Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":7,"skipped":56,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:52.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
Jun 23 07:17:52.700: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-5650 create -f -'
Jun 23 07:17:53.553: INFO: stderr: ""
Jun 23 07:17:53.553: INFO: stdout: "pod/busybox1 created\n"
Jun 23 07:17:53.553: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [busybox1]
Jun 23 07:17:53.553: INFO: Waiting up to 5m0s for pod "busybox1" in namespace "kubectl-5650" to be "running and ready"
Jun 23 07:17:53.559: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.545707ms
Jun 23 07:17:53.559: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:17:55.566: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012467612s
Jun 23 07:17:55.566: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:17:57.565: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011753493s
Jun 23 07:17:57.565: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:17:59.583: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030225661s
Jun 23 07:17:59.584: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:18:01.564: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011068861s
Jun 23 07:18:01.564: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:18:03.567: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013466768s
Jun 23 07:18:03.567: INFO: Error evaluating pod condition running and ready: want pod 'busybox1' on 'nodes-us-central1-a-nk1s' to be 'Running' but was 'Pending'
Jun 23 07:18:05.564: INFO: Pod "busybox1": Phase="Running", Reason="", readiness=true. Elapsed: 12.010994364s
Jun 23 07:18:05.564: INFO: Pod "busybox1" satisfied condition "running and ready"
Jun 23 07:18:05.564: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [busybox1]
[It] should copy a file from a running Pod
  test/e2e/kubectl/kubectl.go:1537
STEP: specifying a remote filepath busybox1:/root/foo/bar/foo.bar on the pod
... skipping 24 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl copy
  test/e2e/kubectl/kubectl.go:1520
    should copy a file from a running Pod
    test/e2e/kubectl/kubectl.go:1537
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":8,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-9a5630c8-b207-4aa5-8b8f-27ae7d63ed12
STEP: Creating a pod to test consume secrets
Jun 23 07:18:00.710: INFO: Waiting up to 5m0s for pod "pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb" in namespace "secrets-2215" to be "Succeeded or Failed"
Jun 23 07:18:00.716: INFO: Pod "pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425157ms
Jun 23 07:18:02.752: INFO: Pod "pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04212038s
Jun 23 07:18:04.720: INFO: Pod "pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010664276s
Jun 23 07:18:06.721: INFO: Pod "pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010970025s
STEP: Saw pod success
Jun 23 07:18:06.721: INFO: Pod "pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb" satisfied condition "Succeeded or Failed"
Jun 23 07:18:06.727: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:18:06.753: INFO: Waiting for pod pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb to disappear
Jun 23 07:18:06.762: INFO: Pod pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.139 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:8.615 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":214,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:07.386: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 77 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods Extended
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 135 lines ...
test/e2e/node/framework.go:23
  Pod Container Status
  test/e2e/node/pods.go:202
    should never report container start when an init container fails
    test/e2e/node/pods.go:216
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails","total":-1,"completed":7,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:08.969: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 201 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      test/e2e/storage/testsuites/volume_expand.go:176
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":14,"skipped":91,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:09.604: INFO: Only supported for providers [openstack] (not gce)
... skipping 55 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:18:09.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conformance-tests-314" for this suite.

•
------------------------------
{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":-1,"completed":15,"skipped":101,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:09.776: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 35 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:18:09.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-3618" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":7,"skipped":98,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:09.980: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 64 lines ...
Jun 23 07:18:07.691: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5266 explain e2e-test-crd-publish-openapi-6060-crds.spec'
Jun 23 07:18:07.918: INFO: stderr: ""
Jun 23 07:18:07.918: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-6060-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun 23 07:18:07.918: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5266 explain e2e-test-crd-publish-openapi-6060-crds.spec.bars'
Jun 23 07:18:08.148: INFO: stderr: ""
Jun 23 07:18:08.148: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-6060-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   feeling\t<string>\n     Whether Bar is feeling great.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 23 07:18:08.148: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5266 explain e2e-test-crd-publish-openapi-6060-crds.spec.bars2'
Jun 23 07:18:08.363: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 07:18:11.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5266" for this suite.
... skipping 2 lines ...
• [SLOW TEST:10.294 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":2,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:11.598: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:187

... skipping 23 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 07:18:10.035: INFO: Waiting up to 5m0s for pod "pod-65bde3af-e90c-4580-80d7-110688f13cce" in namespace "emptydir-7924" to be "Succeeded or Failed"
Jun 23 07:18:10.042: INFO: Pod "pod-65bde3af-e90c-4580-80d7-110688f13cce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452624ms
Jun 23 07:18:12.050: INFO: Pod "pod-65bde3af-e90c-4580-80d7-110688f13cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014284368s
Jun 23 07:18:14.049: INFO: Pod "pod-65bde3af-e90c-4580-80d7-110688f13cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013263302s
STEP: Saw pod success
Jun 23 07:18:14.049: INFO: Pod "pod-65bde3af-e90c-4580-80d7-110688f13cce" satisfied condition "Succeeded or Failed"
Jun 23 07:18:14.053: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-65bde3af-e90c-4580-80d7-110688f13cce container test-container: <nil>
STEP: delete the pod
Jun 23 07:18:14.080: INFO: Waiting for pod pod-65bde3af-e90c-4580-80d7-110688f13cce to disappear
Jun 23 07:18:14.085: INFO: Pod pod-65bde3af-e90c-4580-80d7-110688f13cce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 07:18:14.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7924" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":8,"skipped":106,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:14.135: INFO: Only supported for providers [aws] (not gce)
... skipping 67 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-map-6944d1b5-c4c6-49bc-a1c0-c12188306a49
STEP: Creating a pod to test consume secrets
Jun 23 07:18:06.251: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c" in namespace "projected-6098" to be "Succeeded or Failed"
Jun 23 07:18:06.256: INFO: Pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236641ms
Jun 23 07:18:08.260: INFO: Pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008334115s
Jun 23 07:18:10.261: INFO: Pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009164166s
Jun 23 07:18:12.260: INFO: Pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009086265s
Jun 23 07:18:14.267: INFO: Pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016029736s
STEP: Saw pod success
Jun 23 07:18:14.267: INFO: Pod "pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c" satisfied condition "Succeeded or Failed"
Jun 23 07:18:14.275: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:18:14.319: INFO: Waiting for pod pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c to disappear
Jun 23 07:18:14.327: INFO: Pod pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.141 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:14.421: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 88 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl validation
  test/e2e/kubectl/kubectl.go:1033
    should create/apply a CR with unknown fields for CRD with no validation schema
    test/e2e/kubectl/kubectl.go:1034
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":12,"skipped":85,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-42cab257-9e14-407f-b117-ad3fe509b295
STEP: Creating a pod to test consume configMaps
Jun 23 07:18:11.674: INFO: Waiting up to 5m0s for pod "pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239" in namespace "configmap-6991" to be "Succeeded or Failed"
Jun 23 07:18:11.683: INFO: Pod "pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21028ms
Jun 23 07:18:13.695: INFO: Pod "pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021509837s
Jun 23 07:18:15.711: INFO: Pod "pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03787172s
STEP: Saw pod success
Jun 23 07:18:15.712: INFO: Pod "pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239" satisfied condition "Succeeded or Failed"
Jun 23 07:18:15.737: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:18:15.770: INFO: Waiting for pod pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239 to disappear
Jun 23 07:18:15.779: INFO: Pod pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 07:18:15.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6991" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:15.831: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 73 lines ...
• [SLOW TEST:8.697 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:17.810: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 98 lines ...
Jun 23 07:18:09.510: INFO: PersistentVolumeClaim pvc-sxmq5 found but phase is Pending instead of Bound.
Jun 23 07:18:11.514: INFO: PersistentVolumeClaim pvc-sxmq5 found and phase=Bound (16.063304381s)
Jun 23 07:18:11.514: INFO: Waiting up to 3m0s for PersistentVolume local-59bkx to have phase Bound
Jun 23 07:18:11.518: INFO: PersistentVolume local-59bkx found and phase=Bound (3.151996ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-h4g8
STEP: Creating a pod to test exec-volume-test
Jun 23 07:18:11.532: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-h4g8" in namespace "volume-4661" to be "Succeeded or Failed"
Jun 23 07:18:11.541: INFO: Pod "exec-volume-test-preprovisionedpv-h4g8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.464966ms
Jun 23 07:18:13.551: INFO: Pod "exec-volume-test-preprovisionedpv-h4g8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019185052s
Jun 23 07:18:15.552: INFO: Pod "exec-volume-test-preprovisionedpv-h4g8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019932262s
Jun 23 07:18:17.547: INFO: Pod "exec-volume-test-preprovisionedpv-h4g8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014953407s
STEP: Saw pod success
Jun 23 07:18:17.547: INFO: Pod "exec-volume-test-preprovisionedpv-h4g8" satisfied condition "Succeeded or Failed"
Jun 23 07:18:17.551: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod exec-volume-test-preprovisionedpv-h4g8 container exec-container-preprovisionedpv-h4g8: <nil>
STEP: delete the pod
Jun 23 07:18:17.572: INFO: Waiting for pod exec-volume-test-preprovisionedpv-h4g8 to disappear
Jun 23 07:18:17.577: INFO: Pod exec-volume-test-preprovisionedpv-h4g8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-h4g8
Jun 23 07:18:17.577: INFO: Deleting pod "exec-volume-test-preprovisionedpv-h4g8" in namespace "volume-4661"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:17.963: INFO: Only supported for providers [vsphere] (not gce)
... skipping 196 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should store data
      test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":69,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:18:02.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
• [SLOW TEST:8.198 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
  test/e2e/apimachinery/resource_quota.go:1446
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":4,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:24.104: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
STEP: Destroying namespace "apply-9579" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  test/e2e/apimachinery/apply.go:59

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":5,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:24.430: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 79 lines ...
Jun 23 07:18:04.442: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-6611 create -f -'
Jun 23 07:18:05.221: INFO: stderr: ""
Jun 23 07:18:05.221: INFO: stdout: "pod/httpd created\n"
Jun 23 07:18:05.221: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 07:18:05.221: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6611" to be "running and ready"
Jun 23 07:18:05.241: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.354787ms
Jun 23 07:18:05.241: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-m5w1' to be 'Running' but was 'Pending'
Jun 23 07:18:07.245: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.024150457s
Jun 23 07:18:07.245: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-m5w1' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  }]
Jun 23 07:18:09.256: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.034328681s
Jun 23 07:18:09.256: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-m5w1' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  }]
Jun 23 07:18:11.246: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.02430863s
Jun 23 07:18:11.246: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-m5w1' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  }]
Jun 23 07:18:13.258: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.036407544s
Jun 23 07:18:13.258: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-m5w1' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  }]
Jun 23 07:18:15.254: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.032436306s
Jun 23 07:18:15.254: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-m5w1' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  }]
Jun 23 07:18:17.244: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.023115395s
Jun 23 07:18:17.244: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-m5w1' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:05 +0000 UTC  }]
Jun 23 07:18:19.244: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.023019278s
Jun 23 07:18:19.244: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 07:18:19.244: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] running a failing command
  test/e2e/kubectl/kubectl.go:547
Jun 23 07:18:19.244: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-6611 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-2 --restart=Never --pod-running-timeout=2m0s failure-1 -- /bin/sh -c exit 42'
... skipping 23 lines ...
  test/e2e/kubectl/kubectl.go:407
    should return command exit codes
    test/e2e/kubectl/kubectl.go:527
      running a failing command
      test/e2e/kubectl/kubectl.go:547
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":11,"skipped":160,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:26.314: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 252 lines ...
Jun 23 07:18:25.011: INFO: PersistentVolumeClaim pvc-2lbrr found but phase is Pending instead of Bound.
Jun 23 07:18:27.018: INFO: PersistentVolumeClaim pvc-2lbrr found and phase=Bound (10.072745262s)
Jun 23 07:18:27.018: INFO: Waiting up to 3m0s for PersistentVolume local-bxzcq to have phase Bound
Jun 23 07:18:27.022: INFO: PersistentVolume local-bxzcq found and phase=Bound (4.099834ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zn2g
STEP: Creating a pod to test subpath
Jun 23 07:18:27.036: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zn2g" in namespace "provisioning-4791" to be "Succeeded or Failed"
Jun 23 07:18:27.046: INFO: Pod "pod-subpath-test-preprovisionedpv-zn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 9.752477ms
Jun 23 07:18:29.051: INFO: Pod "pod-subpath-test-preprovisionedpv-zn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014758996s
Jun 23 07:18:31.050: INFO: Pod "pod-subpath-test-preprovisionedpv-zn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01393259s
Jun 23 07:18:33.051: INFO: Pod "pod-subpath-test-preprovisionedpv-zn2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014394898s
STEP: Saw pod success
Jun 23 07:18:33.051: INFO: Pod "pod-subpath-test-preprovisionedpv-zn2g" satisfied condition "Succeeded or Failed"
Jun 23 07:18:33.055: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-subpath-test-preprovisionedpv-zn2g container test-container-volume-preprovisionedpv-zn2g: <nil>
STEP: delete the pod
Jun 23 07:18:33.087: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zn2g to disappear
Jun 23 07:18:33.093: INFO: Pod pod-subpath-test-preprovisionedpv-zn2g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zn2g
Jun 23 07:18:33.093: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zn2g" in namespace "provisioning-4791"
... skipping 30 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:33.468: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 109 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Only supported for providers [openstack] (not gce)

      test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 123 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      test/e2e/storage/testsuites/ephemeral.go:315
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":6,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:34.000: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 38 lines ...
• [SLOW TEST:16.084 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  test/e2e/apps/job.go:81
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":11,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:18:34.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 07:18:34.056: INFO: Waiting up to 5m0s for pod "pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d" in namespace "emptydir-4792" to be "Succeeded or Failed"
Jun 23 07:18:34.060: INFO: Pod "pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390609ms
Jun 23 07:18:36.064: INFO: Pod "pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008423364s
Jun 23 07:18:38.066: INFO: Pod "pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010605345s
STEP: Saw pod success
Jun 23 07:18:38.067: INFO: Pod "pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d" satisfied condition "Succeeded or Failed"
Jun 23 07:18:38.070: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d container test-container: <nil>
STEP: delete the pod
Jun 23 07:18:38.092: INFO: Waiting for pod pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d to disappear
Jun 23 07:18:38.103: INFO: Pod pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 50 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should validate Statefulset Status endpoints [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:18:23.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 79 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":9,"skipped":51,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":33,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:38.171: INFO: Only supported for providers [vsphere] (not gce)
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name projected-secret-test-1053f76c-706e-4f0c-890f-d2ae5c8bab03
STEP: Creating a pod to test consume secrets
Jun 23 07:18:34.163: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161" in namespace "projected-7501" to be "Succeeded or Failed"
Jun 23 07:18:34.178: INFO: Pod "pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161": Phase="Pending", Reason="", readiness=false. Elapsed: 14.300493ms
Jun 23 07:18:36.182: INFO: Pod "pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019190158s
Jun 23 07:18:38.183: INFO: Pod "pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01933216s
STEP: Saw pod success
Jun 23 07:18:38.183: INFO: Pod "pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161" satisfied condition "Succeeded or Failed"
Jun 23 07:18:38.187: INFO: Trying to get logs from node nodes-us-central1-a-tdxw pod pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:18:38.236: INFO: Waiting for pod pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161 to disappear
Jun 23 07:18:38.249: INFO: Pod pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 07:18:38.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7501" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:38.295: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 275 lines ...
• [SLOW TEST:32.225 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  test/e2e/storage/pvc_protection.go:147
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":16,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:42.025: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/framework/framework.go:187

... skipping 129 lines ...
test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  test/e2e/storage/csi_mock_volume.go:332
    should require VolumeAttach for ephemeral volume and drivers with attachment
    test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment","total":-1,"completed":14,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 31 lines ...
Jun 23 07:18:25.176: INFO: PersistentVolumeClaim pvc-xgtz9 found but phase is Pending instead of Bound.
Jun 23 07:18:27.181: INFO: PersistentVolumeClaim pvc-xgtz9 found and phase=Bound (14.087170728s)
Jun 23 07:18:27.181: INFO: Waiting up to 3m0s for PersistentVolume local-b6d5l to have phase Bound
Jun 23 07:18:27.185: INFO: PersistentVolume local-b6d5l found and phase=Bound (4.255762ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xdkl
STEP: Creating a pod to test subpath
Jun 23 07:18:27.197: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xdkl" in namespace "provisioning-7871" to be "Succeeded or Failed"
Jun 23 07:18:27.201: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799556ms
Jun 23 07:18:29.209: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011583237s
Jun 23 07:18:31.206: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008674527s
Jun 23 07:18:33.205: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007896882s
Jun 23 07:18:35.207: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009988773s
Jun 23 07:18:37.209: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.011642298s
STEP: Saw pod success
Jun 23 07:18:37.209: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl" satisfied condition "Succeeded or Failed"
Jun 23 07:18:37.213: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-subpath-test-preprovisionedpv-xdkl container test-container-subpath-preprovisionedpv-xdkl: <nil>
STEP: delete the pod
Jun 23 07:18:37.241: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xdkl to disappear
Jun 23 07:18:37.249: INFO: Pod pod-subpath-test-preprovisionedpv-xdkl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xdkl
Jun 23 07:18:37.249: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xdkl" in namespace "provisioning-7871"
STEP: Creating pod pod-subpath-test-preprovisionedpv-xdkl
STEP: Creating a pod to test subpath
Jun 23 07:18:37.277: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xdkl" in namespace "provisioning-7871" to be "Succeeded or Failed"
Jun 23 07:18:37.290: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.947547ms
Jun 23 07:18:39.302: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024449609s
Jun 23 07:18:41.295: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018220531s
Jun 23 07:18:43.306: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029354593s
STEP: Saw pod success
Jun 23 07:18:43.307: INFO: Pod "pod-subpath-test-preprovisionedpv-xdkl" satisfied condition "Succeeded or Failed"
Jun 23 07:18:43.313: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-subpath-test-preprovisionedpv-xdkl container test-container-subpath-preprovisionedpv-xdkl: <nil>
STEP: delete the pod
Jun 23 07:18:43.356: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xdkl to disappear
Jun 23 07:18:43.364: INFO: Pod pod-subpath-test-preprovisionedpv-xdkl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xdkl
Jun 23 07:18:43.364: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xdkl" in namespace "provisioning-7871"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:43.929: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 182 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 130 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:18:44.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2426" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":7,"skipped":91,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:44.489: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":11,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:18:41.815: INFO: >>> kubeConfig: /root/.kube/config
... skipping 3 lines ...
[It] should support non-existent path
  test/e2e/storage/testsuites/subpath.go:196
Jun 23 07:18:41.839: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 07:18:41.844: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-5gd8
STEP: Creating a pod to test subpath
Jun 23 07:18:41.853: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-5gd8" in namespace "provisioning-2114" to be "Succeeded or Failed"
Jun 23 07:18:41.858: INFO: Pod "pod-subpath-test-inlinevolume-5gd8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.192083ms
Jun 23 07:18:43.881: INFO: Pod "pod-subpath-test-inlinevolume-5gd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028409913s
Jun 23 07:18:45.864: INFO: Pod "pod-subpath-test-inlinevolume-5gd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011502607s
Jun 23 07:18:47.863: INFO: Pod "pod-subpath-test-inlinevolume-5gd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009965264s
STEP: Saw pod success
Jun 23 07:18:47.863: INFO: Pod "pod-subpath-test-inlinevolume-5gd8" satisfied condition "Succeeded or Failed"
Jun 23 07:18:47.868: INFO: Trying to get logs from node nodes-us-central1-a-m5w1 pod pod-subpath-test-inlinevolume-5gd8 container test-container-volume-inlinevolume-5gd8: <nil>
STEP: delete the pod
Jun 23 07:18:47.897: INFO: Waiting for pod pod-subpath-test-inlinevolume-5gd8 to disappear
Jun 23 07:18:47.900: INFO: Pod pod-subpath-test-inlinevolume-5gd8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-5gd8
Jun 23 07:18:47.900: INFO: Deleting pod "pod-subpath-test-inlinevolume-5gd8" in namespace "provisioning-2114"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":12,"skipped":90,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:47.952: INFO: Only supported for providers [vsphere] (not gce)
... skipping 98 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:49.023: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:187

... skipping 67 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  test/e2e/common/node/container_probe.go:59
[It] should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
  test/e2e/common/node/container_probe.go:623
Jun 23 07:17:22.319: INFO: Waiting up to 5m0s for all pods (need at least 1) in namespace 'container-probe-4421' to be running and ready
Jun 23 07:17:22.344: INFO: The status of Pod probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 07:17:22.344: INFO: 0 / 1 pods in namespace 'container-probe-4421' are running and ready (0 seconds elapsed)
Jun 23 07:17:22.344: INFO: expected 0 pod replicas in namespace 'container-probe-4421', 0 are Running and Ready.
Jun 23 07:17:22.344: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 07:17:22.344: INFO: probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc  nodes-us-central1-a-tdxw  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  }]
Jun 23 07:17:22.344: INFO: 
Jun 23 07:17:24.355: INFO: The status of Pod probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 07:17:24.355: INFO: 0 / 1 pods in namespace 'container-probe-4421' are running and ready (2 seconds elapsed)
Jun 23 07:17:24.355: INFO: expected 0 pod replicas in namespace 'container-probe-4421', 0 are Running and Ready.
Jun 23 07:17:24.355: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 07:17:24.355: INFO: probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc  nodes-us-central1-a-tdxw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  }]
Jun 23 07:17:24.355: INFO: 
Jun 23 07:17:26.365: INFO: The status of Pod probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 07:17:26.365: INFO: 0 / 1 pods in namespace 'container-probe-4421' are running and ready (4 seconds elapsed)
Jun 23 07:17:26.365: INFO: expected 0 pod replicas in namespace 'container-probe-4421', 0 are Running and Ready.
Jun 23 07:17:26.365: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 07:17:26.365: INFO: probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc  nodes-us-central1-a-tdxw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  }]
Jun 23 07:17:26.365: INFO: 
Jun 23 07:17:28.356: INFO: The status of Pod probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 07:17:28.356: INFO: 0 / 1 pods in namespace 'container-probe-4421' are running and ready (6 seconds elapsed)
Jun 23 07:17:28.356: INFO: expected 0 pod replicas in namespace 'container-probe-4421', 0 are Running and Ready.
Jun 23 07:17:28.356: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 07:17:28.356: INFO: probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc  nodes-us-central1-a-tdxw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  }]
Jun 23 07:17:28.356: INFO: 
Jun 23 07:17:30.357: INFO: The status of Pod probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 07:17:30.357: INFO: 0 / 1 pods in namespace 'container-probe-4421' are running and ready (8 seconds elapsed)
Jun 23 07:17:30.357: INFO: expected 0 pod replicas in namespace 'container-probe-4421', 0 are Running and Ready.
Jun 23 07:17:30.357: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 07:17:30.357: INFO: probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc  nodes-us-central1-a-tdxw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  }]
Jun 23 07:17:30.357: INFO: 
Jun 23 07:17:32.373: INFO: The status of Pod probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 07:17:32.373: INFO: 0 / 1 pods in namespace 'container-probe-4421' are running and ready (10 seconds elapsed)
Jun 23 07:17:32.373: INFO: expected 0 pod replicas in namespace 'container-probe-4421', 0 are Running and Ready.
Jun 23 07:17:32.373: INFO: POD                                              NODE                      PHASE    GRACE  CONDITIONS
Jun 23 07:17:32.373: INFO: probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc  nodes-us-central1-a-tdxw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC ContainersNotReady containers with unready status: [probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:17:22 +0000 UTC  }]
Jun 23 07:17:32.373: INFO: 
Jun 23 07:17:34.368: INFO: 1 / 1 pods in namespace 'container-probe-4421' are running and ready (12 seconds elapsed)
... skipping 7 lines ...
• [SLOW TEST:88.121 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
  test/e2e/common/node/container_probe.go:623
------------------------------
{"msg":"PASSED [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating","total":-1,"completed":9,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:50.422: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:187

... skipping 29 lines ...
Jun 23 07:18:38.283: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-6233 create -f -'
Jun 23 07:18:39.401: INFO: stderr: ""
Jun 23 07:18:39.401: INFO: stdout: "pod/httpd created\n"
Jun 23 07:18:39.401: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 07:18:39.401: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6233" to be "running and ready"
Jun 23 07:18:39.406: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.93849ms
Jun 23 07:18:39.406: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:18:41.413: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011471164s
Jun 23 07:18:41.413: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:18:43.412: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010921068s
Jun 23 07:18:43.412: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-central1-a-tdxw' to be 'Running' but was 'Pending'
Jun 23 07:18:45.412: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.011057154s
Jun 23 07:18:45.412: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-tdxw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC  }]
Jun 23 07:18:47.412: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.010730194s
Jun 23 07:18:47.412: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-tdxw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC  }]
Jun 23 07:18:49.412: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.011007111s
Jun 23 07:18:49.412: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-central1-a-tdxw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 07:18:39 +0000 UTC  }]
Jun 23 07:18:51.411: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.009848874s
Jun 23 07:18:51.411: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 07:18:51.411: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec using resource/name
  test/e2e/kubectl/kubectl.go:459
STEP: executing a command in the container
... skipping 23 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:407
    should support exec using resource/name
    test/e2e/kubectl/kubectl.go:459
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":9,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:51.921: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 121 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:211
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":103,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:53.386: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
• [SLOW TEST:83.002 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:54.117: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 319 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should store data
      test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":47,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:54.942: INFO: Only supported for providers [vsphere] (not gce)
... skipping 141 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:18:56.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3593" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":12,"skipped":173,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 85 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:57.421: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 43 lines ...
Jun 23 07:18:27.604: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:27.611: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:27.615: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:27.623: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:27.645: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:27.653: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:27.653: INFO: Lookups using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local]

Jun 23 07:18:32.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.666: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.671: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.676: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.692: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.696: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.701: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.706: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:32.706: INFO: Lookups using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local]

Jun 23 07:18:37.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.675: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.683: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.695: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.702: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.707: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.715: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.719: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:37.719: INFO: Lookups using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local]

Jun 23 07:18:42.669: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.685: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.700: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.709: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.714: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.721: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.730: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.741: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:42.741: INFO: Lookups using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local]

Jun 23 07:18:47.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.665: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.675: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.681: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.686: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.691: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.696: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.704: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:47.704: INFO: Lookups using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local]

Jun 23 07:18:52.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.665: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.670: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.674: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.678: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.684: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.688: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.693: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local from pod dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de: the server could not find the requested resource (get pods dns-test-c5d55c9a-15f8-4624-be68-14942194c7de)
Jun 23 07:18:52.693: INFO: Lookups using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5491.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5491.svc.cluster.local jessie_udp@dns-test-service-2.dns-5491.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5491.svc.cluster.local]

Jun 23 07:18:57.708: INFO: DNS probes using dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:36.283 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":5,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:18:57.802: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 49 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-projected-all-test-volume-9e619863-9be1-4d94-8e69-ad5f6631a771
STEP: Creating secret with name secret-projected-all-test-volume-d9cc0aab-850f-4e2b-a3cd-877bc5d845e9
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 23 07:18:52.021: INFO: Waiting up to 5m0s for pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00" in namespace "projected-865" to be "Succeeded or Failed"
Jun 23 07:18:52.026: INFO: Pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689621ms
Jun 23 07:18:54.033: INFO: Pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011560416s
Jun 23 07:18:56.032: INFO: Pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010205s
Jun 23 07:18:58.031: INFO: Pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009387881s
Jun 23 07:19:00.031: INFO: Pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.009677991s
STEP: Saw pod success
Jun 23 07:19:00.031: INFO: Pod "projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00" satisfied condition "Succeeded or Failed"
Jun 23 07:19:00.041: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00 container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 23 07:19:00.084: INFO: Waiting for pod projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00 to disappear
Jun 23 07:19:00.090: INFO: Pod projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:187
... skipping 15 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:112
STEP: Creating configMap with name configmap-test-volume-map-03a519a7-6444-4800-981c-a04ff3e20831
STEP: Creating a pod to test consume configMaps
Jun 23 07:18:54.201: INFO: Waiting up to 5m0s for pod "pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f" in namespace "configmap-4470" to be "Succeeded or Failed"
Jun 23 07:18:54.214: INFO: Pod "pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.644605ms
Jun 23 07:18:56.218: INFO: Pod "pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016672915s
Jun 23 07:18:58.219: INFO: Pod "pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01702596s
Jun 23 07:19:00.220: INFO: Pod "pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018782155s
STEP: Saw pod success
Jun 23 07:19:00.220: INFO: Pod "pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f" satisfied condition "Succeeded or Failed"
Jun 23 07:19:00.223: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:19:00.245: INFO: Waiting for pod pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f to disappear
Jun 23 07:19:00.256: INFO: Pod pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.143 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/storage/configmap_volume.go:112
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":86,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:00.299: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 22 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 07:18:57.471: INFO: Waiting up to 5m0s for pod "downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead" in namespace "downward-api-4777" to be "Succeeded or Failed"
Jun 23 07:18:57.494: INFO: Pod "downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead": Phase="Pending", Reason="", readiness=false. Elapsed: 22.752097ms
Jun 23 07:18:59.498: INFO: Pod "downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026749686s
Jun 23 07:19:01.500: INFO: Pod "downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029252733s
Jun 23 07:19:03.499: INFO: Pod "downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027774506s
STEP: Saw pod success
Jun 23 07:19:03.499: INFO: Pod "downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead" satisfied condition "Succeeded or Failed"
Jun 23 07:19:03.502: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:19:03.530: INFO: Waiting for pod downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead to disappear
Jun 23 07:19:03.534: INFO: Pod downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.108 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
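
The Downward API block above checks that a container sees the node's IP. The wiring is a fieldRef on status.hostIP; a minimal sketch (the variable name is illustrative):

package sketch

import v1 "k8s.io/api/core/v1"

// hostIPEnv exposes the scheduling node's IP to the container; the kubelet
// substitutes the value when the container starts.
var hostIPEnv = v1.EnvVar{
	Name: "HOST_IP",
	ValueFrom: &v1.EnvVarSource{
		FieldRef: &v1.ObjectFieldSelector{
			FieldPath: "status.hostIP",
		},
	},
}
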
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":52,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:19:00.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:8.823 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":11,"skipped":52,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 12 lines ...
Jun 23 07:18:55.506: INFO: The phase of Pod server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2 is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:18:57.533: INFO: Pod "server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036411331s
Jun 23 07:18:57.533: INFO: The phase of Pod server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2 is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:18:59.505: INFO: Pod "server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2": Phase="Running", Reason="", readiness=true. Elapsed: 6.008942057s
Jun 23 07:18:59.505: INFO: The phase of Pod server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2 is Running (Ready = true)
Jun 23 07:18:59.505: INFO: Pod "server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2" satisfied condition "running and ready"
Jun 23 07:18:59.538: INFO: Waiting up to 5m0s for pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72" in namespace "pods-3081" to be "Succeeded or Failed"
Jun 23 07:18:59.544: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286151ms
Jun 23 07:19:01.548: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010646035s
Jun 23 07:19:03.549: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011001864s
Jun 23 07:19:05.557: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019376094s
Jun 23 07:19:07.548: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010467674s
Jun 23 07:19:09.552: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014211484s
STEP: Saw pod success
Jun 23 07:19:09.552: INFO: Pod "client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72" satisfied condition "Succeeded or Failed"
Jun 23 07:19:09.557: INFO: Trying to get logs from node nodes-us-central1-a-nk1s pod client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72 container env3cont: <nil>
STEP: delete the pod
Jun 23 07:19:09.596: INFO: Waiting for pod client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72 to disappear
Jun 23 07:19:09.613: INFO: Pod client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72 no longer exists
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:16.170 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":120,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:09.643: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 183 lines ...
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      test/e2e/storage/testsuites/volume_expand.go:252
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":12,"skipped":85,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:10.453: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 297 lines ...
  test/e2e/storage/persistent_volumes-local.go:194
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:256
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":17,"skipped":107,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:10.970: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 56 lines ...
• [SLOW TEST:17.174 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":174,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:13.819: INFO: Only supported for providers [azure] (not gce)
... skipping 104 lines ...
• [SLOW TEST:7.442 seconds]
[sig-node] KubeletManagedEtcHosts
test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":57,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:17:35.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:291
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-5091" for this suite.


• [SLOW TEST:102.156 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:291
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:17.374: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:187

... skipping 202 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:19:17.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-3084" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:17.683: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 186 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:19:17.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-5229" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":5,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 45 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 23 07:19:06.746: INFO: File wheezy_udp@dns-test-service-3.dns-5777.svc.cluster.local from pod  dns-5777/dns-test-774cba79-daed-4b92-ae4d-44685adb0e1e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:19:06.751: INFO: File jessie_udp@dns-test-service-3.dns-5777.svc.cluster.local from pod  dns-5777/dns-test-774cba79-daed-4b92-ae4d-44685adb0e1e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:19:06.751: INFO: Lookups using dns-5777/dns-test-774cba79-daed-4b92-ae4d-44685adb0e1e failed for: [wheezy_udp@dns-test-service-3.dns-5777.svc.cluster.local jessie_udp@dns-test-service-3.dns-5777.svc.cluster.local]

Jun 23 07:19:11.765: INFO: DNS probes using dns-test-774cba79-daed-4b92-ae4d-44685adb0e1e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5777.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5777.svc.cluster.local; sleep 1; done
... skipping 25 lines ...
test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":13,"skipped":134,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:17.979: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 46 lines ...
Jun 23 07:19:08.528: INFO: PersistentVolumeClaim pvc-lw65h found but phase is Pending instead of Bound.
Jun 23 07:19:10.531: INFO: PersistentVolumeClaim pvc-lw65h found and phase=Bound (4.016069576s)
Jun 23 07:19:10.531: INFO: Waiting up to 3m0s for PersistentVolume local-rgqvl to have phase Bound
Jun 23 07:19:10.534: INFO: PersistentVolume local-rgqvl found and phase=Bound (2.634645ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-qr27
STEP: Creating a pod to test exec-volume-test
Jun 23 07:19:10.547: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-qr27" in namespace "volume-8131" to be "Succeeded or Failed"
Jun 23 07:19:10.570: INFO: Pod "exec-volume-test-preprovisionedpv-qr27": Phase="Pending", Reason="", readiness=false. Elapsed: 23.012228ms
Jun 23 07:19:12.576: INFO: Pod "exec-volume-test-preprovisionedpv-qr27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028997103s
Jun 23 07:19:14.576: INFO: Pod "exec-volume-test-preprovisionedpv-qr27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028918227s
Jun 23 07:19:16.575: INFO: Pod "exec-volume-test-preprovisionedpv-qr27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027615535s
Jun 23 07:19:18.575: INFO: Pod "exec-volume-test-preprovisionedpv-qr27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.028366266s
STEP: Saw pod success
Jun 23 07:19:18.575: INFO: Pod "exec-volume-test-preprovisionedpv-qr27" satisfied condition "Succeeded or Failed"
Jun 23 07:19:18.581: INFO: Trying to get logs from node nodes-us-central1-a-50vm pod exec-volume-test-preprovisionedpv-qr27 container exec-container-preprovisionedpv-qr27: <nil>
STEP: delete the pod
Jun 23 07:19:18.599: INFO: Waiting for pod exec-volume-test-preprovisionedpv-qr27 to disappear
Jun 23 07:19:18.603: INFO: Pod exec-volume-test-preprovisionedpv-qr27 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-qr27
Jun 23 07:19:18.603: INFO: Deleting pod "exec-volume-test-preprovisionedpv-qr27" in namespace "volume-8131"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:18.897: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:187

... skipping 30 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:19:19.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4636" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":10,"skipped":94,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:19.132: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 73 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:19:22.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ephemeral-containers-test-4023" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod","total":-1,"completed":14,"skipped":138,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 10 lines ...
Jun 23 07:19:22.364: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 07:19:22.364: INFO: stdout: "scheduler etcd-0 controller-manager etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Jun 23 07:19:22.364: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-184 get componentstatuses scheduler'
Jun 23 07:19:22.463: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 07:19:22.463: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Jun 23 07:19:22.463: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-184 get componentstatuses etcd-0'
Jun 23 07:19:22.565: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 07:19:22.565: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-0   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
STEP: getting status of controller-manager
Jun 23 07:19:22.565: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-184 get componentstatuses controller-manager'
Jun 23 07:19:22.681: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 07:19:22.682: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Jun 23 07:19:22.682: INFO: Running '/logs/artifacts/9e3e8584-f2c2-11ec-8dfe-daa417708791/kubectl --server=https://35.225.255.125 --kubeconfig=/root/.kube/config --namespace=kubectl-184 get componentstatuses etcd-1'
Jun 23 07:19:22.791: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 23 07:19:22.791: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-1   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:19:22.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-184" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":15,"skipped":140,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:22.818: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
• [SLOW TEST:80.323 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  test/e2e/common/node/container_probe.go:317
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":11,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 105 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":57,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 122 lines ...
test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  test/e2e/storage/csi_mock_volume.go:1636
    should modify fsGroup if fsGroupPolicy=File
    test/e2e/storage/csi_mock_volume.go:1660
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":13,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:26.452: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:187

... skipping 56 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:19:27.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6339" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":16,"skipped":62,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
Jun 23 07:19:27.728: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 115 lines ...
Jun 23 07:18:55.309: INFO: PersistentVolumeClaim pvc-4n4jj found but phase is Pending instead of Bound.
Jun 23 07:18:57.316: INFO: PersistentVolumeClaim pvc-4n4jj found and phase=Bound (6.05278518s)
Jun 23 07:18:57.316: INFO: Waiting up to 3m0s for PersistentVolume local-6l8xf to have phase Bound
Jun 23 07:18:57.327: INFO: PersistentVolume local-6l8xf found and phase=Bound (10.860644ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xg94
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 07:18:57.366: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xg94" in namespace "provisioning-1975" to be "Succeeded or Failed"
Jun 23 07:18:57.376: INFO: Pod "pod-subpath-test-preprovisionedpv-xg94": Phase="Pending", Reason="", readiness=false. Elapsed: 9.932743ms
Jun 23 07:18:59.383: INFO: Pod "pod-subpath-test-preprovisionedpv-xg94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017162907s
Jun 23 07:19:01.382: INFO: Pod "pod-subpath-test-preprovisionedpv-xg94": Phase="Running", Reason="", readiness=true. Elapsed: 4.016153687s
Jun 23 07:19:03.387: INFO: Pod "pod-subpath-test-preprovisionedpv-xg94": Phase="Running", Reason="", readiness=true. Elapsed: 6.021427467s
Jun 23 07:19:05.390: INFO: Pod "pod-subpath-test-preprovisionedpv-xg94": Phase="Running", Reason="", readiness=true. Elapsed: 8.024043822s
Jun 23 07:19:07.397: INFO: Pod "pod-subpath-test-preprovisionedpv-xg94": Phase="
... skipping 30599 lines ...
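The subpath test above shows two polling loops: the PVC is retried while Pending until it reports Bound, then the test pod is polled until it reaches "Succeeded or Failed". A minimal sketch of both waits using wait.PollImmediate; these are hypothetical helpers, not the framework's own e2epod/e2epv utilities:

// Package pvcwait: sketch of the PVC-bound and pod-finished polls above.
package pvcwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPVCBound polls every 2s until the claim's phase is Bound, as in the
// log's "found but phase is Pending instead of Bound" retries.
func WaitPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

// WaitPodFinished polls until the pod's phase is Succeeded or Failed.
func WaitPodFinished(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}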






[truncated kube-proxy log dump: "Service updated ports" / "Adding new service port" / "Removing new service port" events for the ~200 latency-svc-* services in namespace svc-latency-7823 (plus services-1285/clusterip-service and services-1285/externalsvc), interleaved with "Syncing iptables rules" / "Reloading service iptables data" passes growing to numServices=207 and numNATRules=1039, each "SyncProxyRules complete" taking roughly 50-140ms]
portCount=0\nI0623 07:24:25.308474      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qg5gd\" portCount=0\nI0623 07:24:25.321979      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qkfmg\" portCount=0\nI0623 07:24:25.335190      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qlpg8\" portCount=0\nI0623 07:24:25.373031      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qmhr7\" portCount=0\nI0623 07:24:25.404651      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qn9zn\" portCount=0\nI0623 07:24:25.422499      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qnmxj\" portCount=0\nI0623 07:24:25.464536      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qqc55\" portCount=0\nI0623 07:24:25.515763      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-qxmjv\" portCount=0\nI0623 07:24:25.546186      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-r46cd\" portCount=0\nI0623 07:24:25.561593      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-r6c6m\" portCount=0\nI0623 07:24:25.581235      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-r86vg\" portCount=0\nI0623 07:24:25.613320      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-r95ns\" portCount=0\nI0623 07:24:25.640737      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-rb269\" portCount=0\nI0623 07:24:25.661590      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-rk2sj\" portCount=0\nI0623 07:24:25.724176      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-rndfc\" portCount=0\nI0623 07:24:25.748330      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-rpqtw\" portCount=0\nI0623 07:24:25.772382      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-rtkvt\" portCount=0\nI0623 07:24:25.792143      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-s2qpx\" portCount=0\nI0623 07:24:25.806953      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-s8gqn\" portCount=0\nI0623 07:24:25.826476      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-nm77b\"\nI0623 07:24:25.826509      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-q2jj7\"\nI0623 07:24:25.826554      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qcvjb\"\nI0623 07:24:25.826566      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-r6c6m\"\nI0623 07:24:25.826587      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-rpqtw\"\nI0623 07:24:25.826634      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-s2qpx\"\nI0623 07:24:25.826663      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-p8rl5\"\nI0623 07:24:25.826677      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qmhr7\"\nI0623 07:24:25.826728      11 service.go:462] \"Removing service 
port\" portName=\"svc-latency-7823/latency-svc-qnmxj\"\nI0623 07:24:25.826759      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-pbfrm\"\nI0623 07:24:25.826772      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-r46cd\"\nI0623 07:24:25.826788      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-r86vg\"\nI0623 07:24:25.826838      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-pgmvr\"\nI0623 07:24:25.826854      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-pnsl4\"\nI0623 07:24:25.826866      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-rk2sj\"\nI0623 07:24:25.826907      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-rtkvt\"\nI0623 07:24:25.826926      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-s8gqn\"\nI0623 07:24:25.826937      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-pns55\"\nI0623 07:24:25.826947      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qkfmg\"\nI0623 07:24:25.826958      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-rndfc\"\nI0623 07:24:25.826998      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-nsmjf\"\nI0623 07:24:25.827015      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-nt752\"\nI0623 07:24:25.827038      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qn9zn\"\nI0623 07:24:25.827090      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-rb269\"\nI0623 07:24:25.827118      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-nnbzj\"\nI0623 07:24:25.827132      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qg5gd\"\nI0623 07:24:25.827147      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qlpg8\"\nI0623 07:24:25.827201      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qqc55\"\nI0623 07:24:25.827222      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-qxmjv\"\nI0623 07:24:25.827234      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-ptxkt\"\nI0623 07:24:25.827278      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-r95ns\"\nI0623 07:24:25.828026      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:25.831959      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-sb8lr\" portCount=0\nI0623 07:24:25.865668      11 proxier.go:1461] \"Reloading service iptables data\" numServices=51 numEndpoints=45 numFilterChains=4 numFilterRules=49 numNATChains=45 numNATRules=67\nI0623 07:24:25.871384      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-sf2xq\" portCount=0\nI0623 07:24:25.873365      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.929854ms\"\nI0623 07:24:25.915435      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-sptsx\" portCount=0\nI0623 07:24:25.959725      11 service.go:322] \"Service updated ports\" 
service=\"svc-latency-7823/latency-svc-t2hfg\" portCount=0\nI0623 07:24:25.996451      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-t5549\" portCount=0\nI0623 07:24:26.020151      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-t7sjp\" portCount=0\nI0623 07:24:26.047464      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-t9mmr\" portCount=0\nI0623 07:24:26.066340      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-tkvm5\" portCount=0\nI0623 07:24:26.110461      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-trzpt\" portCount=0\nI0623 07:24:26.130830      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-txbtv\" portCount=0\nI0623 07:24:26.172215      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-v4xsc\" portCount=0\nI0623 07:24:26.185974      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-v8js5\" portCount=0\nI0623 07:24:26.214362      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-v8lw7\" portCount=0\nI0623 07:24:26.233371      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-v94tc\" portCount=0\nI0623 07:24:26.263273      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vbvxk\" portCount=0\nI0623 07:24:26.281069      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vchm8\" portCount=0\nI0623 07:24:26.300604      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vhp2s\" portCount=0\nI0623 07:24:26.315044      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vj478\" portCount=0\nI0623 07:24:26.338932      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vk6fr\" portCount=0\nI0623 07:24:26.382043      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vkqzr\" portCount=0\nI0623 07:24:26.413556      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-vrmxg\" portCount=0\nI0623 07:24:26.460993      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-w7c9r\" portCount=0\nI0623 07:24:26.505040      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-wc99v\" portCount=0\nI0623 07:24:26.521322      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-wd7nm\" portCount=0\nI0623 07:24:26.536875      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-wgv5f\" portCount=0\nI0623 07:24:26.569802      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-wkgsp\" portCount=0\nI0623 07:24:26.592130      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-wr55s\" portCount=0\nI0623 07:24:26.613088      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-wzk5w\" portCount=0\nI0623 07:24:26.637125      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-x2zvn\" portCount=0\nI0623 07:24:26.664144      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-x4hbj\" portCount=0\nI0623 07:24:26.703579      11 service.go:322] \"Service 
updated ports\" service=\"svc-latency-7823/latency-svc-x55xp\" portCount=0\nI0623 07:24:26.729641      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-x56l2\" portCount=0\nI0623 07:24:26.761067      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-x9v52\" portCount=0\nI0623 07:24:26.832727      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-xbsbg\" portCount=0\nI0623 07:24:26.832916      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vkqzr\"\nI0623 07:24:26.832986      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-v94tc\"\nI0623 07:24:26.833042      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vbvxk\"\nI0623 07:24:26.833079      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-xbsbg\"\nI0623 07:24:26.833110      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vk6fr\"\nI0623 07:24:26.833143      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-x9v52\"\nI0623 07:24:26.833175      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-trzpt\"\nI0623 07:24:26.833221      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-v4xsc\"\nI0623 07:24:26.833259      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-wkgsp\"\nI0623 07:24:26.833328      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-sb8lr\"\nI0623 07:24:26.833381      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-t5549\"\nI0623 07:24:26.833416      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-v8lw7\"\nI0623 07:24:26.833453      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vhp2s\"\nI0623 07:24:26.833498      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-wr55s\"\nI0623 07:24:26.833536      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-sf2xq\"\nI0623 07:24:26.833583      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-t2hfg\"\nI0623 07:24:26.833638      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vrmxg\"\nI0623 07:24:26.833693      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-t9mmr\"\nI0623 07:24:26.833734      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vchm8\"\nI0623 07:24:26.833776      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-tkvm5\"\nI0623 07:24:26.833808      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-wgv5f\"\nI0623 07:24:26.833838      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-wzk5w\"\nI0623 07:24:26.833866      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-sptsx\"\nI0623 07:24:26.833907      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-t7sjp\"\nI0623 07:24:26.833950      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-wc99v\"\nI0623 07:24:26.833981      11 service.go:462] \"Removing 
service port\" portName=\"svc-latency-7823/latency-svc-wd7nm\"\nI0623 07:24:26.834014      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-x2zvn\"\nI0623 07:24:26.834089      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-x4hbj\"\nI0623 07:24:26.834124      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-txbtv\"\nI0623 07:24:26.834163      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-vj478\"\nI0623 07:24:26.834197      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-x55xp\"\nI0623 07:24:26.834227      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-x56l2\"\nI0623 07:24:26.834259      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-v8js5\"\nI0623 07:24:26.834291      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-w7c9r\"\nI0623 07:24:26.834833      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:26.873619      11 proxier.go:1461] \"Reloading service iptables data\" numServices=17 numEndpoints=12 numFilterChains=4 numFilterRules=15 numNATChains=17 numNATRules=39\nI0623 07:24:26.879988      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.074997ms\"\nI0623 07:24:26.975002      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-xjxd9\" portCount=0\nI0623 07:24:27.109515      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-xlfj7\" portCount=0\nI0623 07:24:27.169390      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-xmpq2\" portCount=0\nI0623 07:24:27.218459      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-xrstx\" portCount=0\nI0623 07:24:27.314442      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zc8ln\" portCount=0\nI0623 07:24:27.400505      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zc8qm\" portCount=0\nI0623 07:24:27.529830      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zfrvn\" portCount=0\nI0623 07:24:27.621809      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zh4j6\" portCount=0\nI0623 07:24:27.723104      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zj75s\" portCount=0\nI0623 07:24:27.805617      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zp6gz\" portCount=0\nI0623 07:24:27.831003      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-zc8qm\"\nI0623 07:24:27.831033      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-zfrvn\"\nI0623 07:24:27.831049      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-zh4j6\"\nI0623 07:24:27.831060      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-xrstx\"\nI0623 07:24:27.831072      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-zc8ln\"\nI0623 07:24:27.831084      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-zj75s\"\nI0623 07:24:27.831093      11 service.go:462] \"Removing service port\" 
portName=\"svc-latency-7823/latency-svc-zp6gz\"\nI0623 07:24:27.831108      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-xjxd9\"\nI0623 07:24:27.831119      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-xlfj7\"\nI0623 07:24:27.831133      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-xmpq2\"\nI0623 07:24:27.831564      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:27.864551      11 service.go:322] \"Service updated ports\" service=\"svc-latency-7823/latency-svc-zph8w\" portCount=0\nI0623 07:24:27.916475      11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=18 numNATRules=42\nI0623 07:24:27.924428      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"93.434541ms\"\nI0623 07:24:27.984170      11 service.go:322] \"Service updated ports\" service=\"services-1285/clusterip-service\" portCount=0\nI0623 07:24:28.925049      11 service.go:462] \"Removing service port\" portName=\"svc-latency-7823/latency-svc-zph8w\"\nI0623 07:24:28.925090      11 service.go:462] \"Removing service port\" portName=\"services-1285/clusterip-service\"\nI0623 07:24:28.925415      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:28.973494      11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=42\nI0623 07:24:28.980244      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.218175ms\"\nI0623 07:24:29.455337      11 service.go:322] \"Service updated ports\" service=\"conntrack-2836/boom-server\" portCount=1\nI0623 07:24:29.980975      11 service.go:437] \"Adding new service port\" portName=\"conntrack-2836/boom-server\" servicePort=\"100.71.23.29:9000/TCP\"\nI0623 07:24:29.981408      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:30.077390      11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=47\nI0623 07:24:30.084245      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"103.308976ms\"\nI0623 07:24:32.363049      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:32.401261      11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=42\nI0623 07:24:32.407109      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.208527ms\"\nI0623 07:24:32.999363      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing4hkgn\"\nI0623 07:24:33.006753      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingvbc8h\"\nI0623 07:24:33.016313      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingjvb62\"\nI0623 07:24:33.063406      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingjvb62\"\nI0623 07:24:33.076755      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingjvb62\"\nI0623 07:24:33.082177      11 endpoints.go:276] 
\"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingjvb62\"\nI0623 07:24:33.098778      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing4hkgn\"\nI0623 07:24:33.103975      11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingvbc8h\"\nI0623 07:24:39.387675      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:39.449338      11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0623 07:24:39.456281      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.733269ms\"\nI0623 07:24:39.691431      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:39.753766      11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0623 07:24:39.762683      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.373504ms\"\nI0623 07:24:42.127509      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:42.162706      11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0623 07:24:42.167999      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.613038ms\"\nI0623 07:24:42.507431      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:42.533840      11 service.go:322] \"Service updated ports\" service=\"services-1285/externalsvc\" portCount=0\nI0623 07:24:42.542193      11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0623 07:24:42.546966      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.639594ms\"\nI0623 07:24:43.547200      11 service.go:462] \"Removing service port\" portName=\"services-1285/externalsvc\"\nI0623 07:24:43.547342      11 proxier.go:853] \"Syncing iptables rules\"\nI0623 07:24:43.585626      11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0623 07:24:43.591314      11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.164813ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-central1-a-tdxw ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-master-us-central1-a-587c ====\n2022/06/23 07:08:43 Running command:\nCommand env: (log-file=/var/log/kube-scheduler.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-scheduler\nArgs (comma-delimited): /usr/local/bin/kube-scheduler,--authentication-kubeconfig=/var/lib/kube-scheduler/kubeconfig,--authorization-kubeconfig=/var/lib/kube-scheduler/kubeconfig,--config=/var/lib/kube-scheduler/config.yaml,--leader-elect=true,--tls-cert-file=/srv/kubernetes/kube-scheduler/server.crt,--tls-private-key-file=/srv/kubernetes/kube-scheduler/server.key,--v=2\n2022/06/23 07:08:43 Now listening for interrupts\nI0623 07:08:43.721069      10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0623 07:08:43.721304      10 flags.go:64] FLAG: --allow-metric-labels=\"[]\"\nI0623 07:08:43.721378      10 flags.go:64] FLAG: 
--alsologtostderr=\"false\"\nI0623 07:08:43.721454      10 flags.go:64] FLAG: --authentication-kubeconfig=\"/var/lib/kube-scheduler/kubeconfig\"\nI0623 07:08:43.721495      10 flags.go:64] FLAG: --authentication-skip-lookup=\"false\"\nI0623 07:08:43.721548      10 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0623 07:08:43.721596      10 flags.go:64] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0623 07:08:43.721638      10 flags.go:64] FLAG: --authorization-always-allow-paths=\"[/healthz,/readyz,/livez]\"\nI0623 07:08:43.721705      10 flags.go:64] FLAG: --authorization-kubeconfig=\"/var/lib/kube-scheduler/kubeconfig\"\nI0623 07:08:43.721754      10 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0623 07:08:43.721791      10 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0623 07:08:43.721843      10 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0623 07:08:43.721894      10 flags.go:64] FLAG: --cert-dir=\"\"\nI0623 07:08:43.721925      10 flags.go:64] FLAG: --client-ca-file=\"\"\nI0623 07:08:43.721952      10 flags.go:64] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0623 07:08:43.722036      10 flags.go:64] FLAG: --contention-profiling=\"true\"\nI0623 07:08:43.722069      10 flags.go:64] FLAG: --disabled-metrics=\"[]\"\nI0623 07:08:43.722122      10 flags.go:64] FLAG: --feature-gates=\"\"\nI0623 07:08:43.722174      10 flags.go:64] FLAG: --help=\"false\"\nI0623 07:08:43.722201      10 flags.go:64] FLAG: --http2-max-streams-per-connection=\"0\"\nI0623 07:08:43.722228      10 flags.go:64] FLAG: --kube-api-burst=\"100\"\nI0623 07:08:43.722281      10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0623 07:08:43.722335      10 flags.go:64] FLAG: --kube-api-qps=\"50\"\nI0623 07:08:43.722374      10 flags.go:64] FLAG: --kubeconfig=\"\"\nI0623 07:08:43.722494      10 flags.go:64] FLAG: --leader-elect=\"true\"\nI0623 07:08:43.722527      10 flags.go:64] FLAG: --leader-elect-lease-duration=\"15s\"\nI0623 07:08:43.722593      10 flags.go:64] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0623 07:08:43.722642      10 flags.go:64] FLAG: --leader-elect-resource-lock=\"leases\"\nI0623 07:08:43.722672      10 flags.go:64] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0623 07:08:43.722719      10 flags.go:64] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0623 07:08:43.722765      10 flags.go:64] FLAG: --leader-elect-retry-period=\"2s\"\nI0623 07:08:43.722795      10 flags.go:64] FLAG: --lock-object-name=\"kube-scheduler\"\nI0623 07:08:43.722822      10 flags.go:64] FLAG: --lock-object-namespace=\"kube-system\"\nI0623 07:08:43.722879      10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0623 07:08:43.722914      10 flags.go:64] FLAG: --log-dir=\"\"\nI0623 07:08:43.722959      10 flags.go:64] FLAG: --log-file=\"\"\nI0623 07:08:43.723001      10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0623 07:08:43.723033      10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0623 07:08:43.723080      10 flags.go:64] FLAG: --log-json-info-buffer-size=\"0\"\nI0623 07:08:43.723132      10 flags.go:64] FLAG: --log-json-split-stream=\"false\"\nI0623 07:08:43.723167      10 flags.go:64] FLAG: --logging-format=\"text\"\nI0623 07:08:43.723214      10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0623 07:08:43.723258      10 flags.go:64] FLAG: --master=\"\"\nI0623 07:08:43.723287      10 flags.go:64] FLAG: --one-output=\"false\"\nI0623 07:08:43.723314      10 
flags.go:64] FLAG: --permit-address-sharing=\"false\"\nI0623 07:08:43.723365      10 flags.go:64] FLAG: --permit-port-sharing=\"false\"\nI0623 07:08:43.723421      10 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration=\"5m0s\"\nI0623 07:08:43.723451      10 flags.go:64] FLAG: --profiling=\"true\"\nI0623 07:08:43.723479      10 flags.go:64] FLAG: --requestheader-allowed-names=\"[]\"\nI0623 07:08:43.723548      10 flags.go:64] FLAG: --requestheader-client-ca-file=\"\"\nI0623 07:08:43.723578      10 flags.go:64] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0623 07:08:43.723666      10 flags.go:64] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0623 07:08:43.723713      10 flags.go:64] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0623 07:08:43.723745      10 flags.go:64] FLAG: --secure-port=\"10259\"\nI0623 07:08:43.723798      10 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0623 07:08:43.723841      10 flags.go:64] FLAG: --skip-headers=\"false\"\nI0623 07:08:43.723872      10 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0623 07:08:43.723897      10 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0623 07:08:43.723944      10 flags.go:64] FLAG: --tls-cert-file=\"/srv/kubernetes/kube-scheduler/server.crt\"\nI0623 07:08:43.723989      10 flags.go:64] FLAG: --tls-cipher-suites=\"[]\"\nI0623 07:08:43.724021      10 flags.go:64] FLAG: --tls-min-version=\"\"\nI0623 07:08:43.724068      10 flags.go:64] FLAG: --tls-private-key-file=\"/srv/kubernetes/kube-scheduler/server.key\"\nI0623 07:08:43.724115      10 flags.go:64] FLAG: --tls-sni-cert-key=\"[]\"\nI0623 07:08:43.724150      10 flags.go:64] FLAG: --v=\"2\"\nI0623 07:08:43.724200      10 flags.go:64] FLAG: --version=\"false\"\nI0623 07:08:43.724246      10 flags.go:64] FLAG: --vmodule=\"\"\nI0623 07:08:43.724282      10 flags.go:64] FLAG: --write-config-to=\"\"\nI0623 07:08:43.726299      10 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\"\nW0623 07:08:44.644529      10 authentication.go:346] Error looking up in-cluster authentication configuration: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.644578      10 authentication.go:347] Continuing without authentication configuration. 
This may treat all requests as anonymous.\nW0623 07:08:44.644588      10 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false\nI0623 07:08:44.657167      10 configfile.go:96] \"Using component config\" config=<\n\tapiVersion: kubescheduler.config.k8s.io/v1beta2\n\tclientConnection:\n\t  acceptContentTypes: \"\"\n\t  burst: 100\n\t  contentType: application/vnd.kubernetes.protobuf\n\t  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n\t  qps: 50\n\tenableContentionProfiling: true\n\tenableProfiling: true\n\thealthzBindAddress: \"\"\n\tkind: KubeSchedulerConfiguration\n\tleaderElection:\n\t  leaderElect: true\n\t  leaseDuration: 15s\n\t  renewDeadline: 10s\n\t  resourceLock: leases\n\t  resourceName: kube-scheduler\n\t  resourceNamespace: kube-system\n\t  retryPeriod: 2s\n\tmetricsBindAddress: \"\"\n\tparallelism: 16\n\tpercentageOfNodesToScore: 0\n\tpodInitialBackoffSeconds: 1\n\tpodMaxBackoffSeconds: 10\n\tprofiles:\n\t- pluginConfig:\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: DefaultPreemptionArgs\n\t      minCandidateNodesAbsolute: 100\n\t      minCandidateNodesPercentage: 10\n\t    name: DefaultPreemption\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      hardPodAffinityWeight: 1\n\t      kind: InterPodAffinityArgs\n\t    name: InterPodAffinity\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: NodeAffinityArgs\n\t    name: NodeAffinity\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: NodeResourcesBalancedAllocationArgs\n\t      resources:\n\t      - name: cpu\n\t        weight: 1\n\t      - name: memory\n\t        weight: 1\n\t    name: NodeResourcesBalancedAllocation\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      kind: NodeResourcesFitArgs\n\t      scoringStrategy:\n\t        resources:\n\t        - name: cpu\n\t          weight: 1\n\t        - name: memory\n\t          weight: 1\n\t        type: LeastAllocated\n\t    name: NodeResourcesFit\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      defaultingType: System\n\t      kind: PodTopologySpreadArgs\n\t    name: PodTopologySpread\n\t  - args:\n\t      apiVersion: kubescheduler.config.k8s.io/v1beta2\n\t      bindTimeoutSeconds: 600\n\t      kind: VolumeBindingArgs\n\t    name: VolumeBinding\n\t  plugins:\n\t    bind:\n\t      enabled:\n\t      - name: DefaultBinder\n\t        weight: 0\n\t    filter:\n\t      enabled:\n\t      - name: NodeUnschedulable\n\t        weight: 0\n\t      - name: NodeName\n\t        weight: 0\n\t      - name: TaintToleration\n\t        weight: 0\n\t      - name: NodeAffinity\n\t        weight: 0\n\t      - name: NodePorts\n\t        weight: 0\n\t      - name: NodeResourcesFit\n\t        weight: 0\n\t      - name: VolumeRestrictions\n\t        weight: 0\n\t      - name: EBSLimits\n\t        weight: 0\n\t      - name: GCEPDLimits\n\t        weight: 0\n\t      - name: NodeVolumeLimits\n\t        weight: 0\n\t      - name: AzureDiskLimits\n\t        weight: 0\n\t      - name: VolumeBinding\n\t        weight: 0\n\t      - name: VolumeZone\n\t        weight: 0\n\t      - name: PodTopologySpread\n\t        weight: 0\n\t      - name: InterPodAffinity\n\t        weight: 0\n\t    multiPoint: {}\n\t    permit: {}\n\t    postBind: {}\n\t    postFilter:\n\t      enabled:\n\t      - name: DefaultPreemption\n\t        weight: 0\n\t    preBind:\n\t    
  enabled:\n\t      - name: VolumeBinding\n\t        weight: 0\n\t    preFilter:\n\t      enabled:\n\t      - name: NodeResourcesFit\n\t        weight: 0\n\t      - name: NodePorts\n\t        weight: 0\n\t      - name: VolumeRestrictions\n\t        weight: 0\n\t      - name: PodTopologySpread\n\t        weight: 0\n\t      - name: InterPodAffinity\n\t        weight: 0\n\t      - name: VolumeBinding\n\t        weight: 0\n\t      - name: NodeAffinity\n\t        weight: 0\n\t    preScore:\n\t      enabled:\n\t      - name: InterPodAffinity\n\t        weight: 0\n\t      - name: PodTopologySpread\n\t        weight: 0\n\t      - name: TaintToleration\n\t        weight: 0\n\t      - name: NodeAffinity\n\t        weight: 0\n\t    queueSort:\n\t      enabled:\n\t      - name: PrioritySort\n\t        weight: 0\n\t    reserve:\n\t      enabled:\n\t      - name: VolumeBinding\n\t        weight: 0\n\t    score:\n\t      enabled:\n\t      - name: NodeResourcesBalancedAllocation\n\t        weight: 1\n\t      - name: ImageLocality\n\t        weight: 1\n\t      - name: InterPodAffinity\n\t        weight: 1\n\t      - name: NodeResourcesFit\n\t        weight: 1\n\t      - name: NodeAffinity\n\t        weight: 1\n\t      - name: PodTopologySpread\n\t        weight: 2\n\t      - name: TaintToleration\n\t        weight: 1\n\t  schedulerName: default-scheduler\n >\nI0623 07:08:44.657249      10 server.go:147] \"Starting Kubernetes Scheduler\" version=\"v1.25.0-alpha.1\"\nI0623 07:08:44.657262      10 server.go:149] \"Golang settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0623 07:08:44.661136      10 configmap_cafile_content.go:202] \"Starting controller\" name=\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\"\nI0623 07:08:44.661249      10 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0623 07:08:44.661616      10 tlsconfig.go:200] \"Loaded serving cert\" certName=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\" certDetail=\"\\\"kube-scheduler\\\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\\\"kubernetes-ca\\\" (2022-06-21 07:07:20 +0000 UTC to 2023-10-12 05:07:20 +0000 UTC (now=2022-06-23 07:08:44.661562241 +0000 UTC))\"\nI0623 07:08:44.661878      10 named_certificates.go:53] \"Loaded SNI cert\" index=0 certName=\"self-signed loopback\" certDetail=\"\\\"apiserver-loopback-client@1655968124\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1655968124\\\" (2022-06-23 06:08:43 +0000 UTC to 2023-06-23 06:08:43 +0000 UTC (now=2022-06-23 07:08:44.661818272 +0000 UTC))\"\nI0623 07:08:44.661915      10 secure_serving.go:210] Serving securely on [::]:10259\nI0623 07:08:44.662284      10 dynamic_serving_content.go:132] \"Starting controller\" name=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\"\nW0623 07:08:44.662359      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.662488      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.665918      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.665975      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.670183      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.670312      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.670492      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.670560      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.670680      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.670747      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.670866      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.670928      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.673696      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.673757      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.673772      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.673807      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nI0623 07:08:44.674306      10 tlsconfig.go:240] \"Starting DynamicServingCertificateController\"\nW0623 07:08:44.675266      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.675319      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.675429      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.675465      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.675562      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.675601      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.676857      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.676918      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.677019      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.677054      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get 
\"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.677159      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.677210      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:44.677718      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:44.677779      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.522295      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.522434      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.534088      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.534219      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.550114      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.550282      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.706413      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.706562      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.708009      10 
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.708116      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.710753      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.710883      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.728600      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.728789      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.759546      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.759703      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.798709      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.798779      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.821403      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.821538      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch 
*v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:45.881627      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:45.881784      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:46.081248      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:46.081410      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:46.093727      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:46.093867      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:46.154742      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:46.154906      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:46.239220      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:46.239278      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:47.392823      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:47.392891      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get 
\"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:47.662780      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:47.662840      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:47.881522      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:47.881673      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:47.911328      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:47.911474      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:47.974586      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:47.975341      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:47.975234      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:47.975512      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.000309      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.000425      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 
07:08:48.036136      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.036201      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.097196      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.097361      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.153339      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.153514      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.159353      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.159501      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.331141      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.331323      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.385292      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.385443      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: 
connection refused\nW0623 07:08:48.841591      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.841654      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:48.850608      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:48.850665      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:51.793000      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:51.793143      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:51.863237      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:51.863383      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:51.954880      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:51.955011      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:52.082603      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:52.082929      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection 
refused\nW0623 07:08:52.158033      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:52.158086      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:52.360075      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:52.360135      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.109306      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.109487      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.183905      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.184104      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.419184      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.419349      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.474028      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.474119      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection 
refused\nW0623 07:08:53.571215      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.571350      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.654713      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.654865      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.812816      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.812980      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:53.917053      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:53.917210      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:54.890054      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:54.890192      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:08:58.910088      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:58.910234      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection 
refused\nW0623 07:08:59.549808      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:08:59.550078      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:00.600727      10 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:00.600827      10 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:01.108617      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:01.108769      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:01.420257      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:01.420424      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:01.648008      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:01.648190      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:01.813011      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:01.813063      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:02.114509      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:02.114633      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:02.273447      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:02.273581      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:02.567087      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:02.567164      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:04.346431      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:04.346479      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:04.415229      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:04.415429      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0623 07:09:04.419033      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0623 07:09:04.419166      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection 
W0623 07:09:25.456159      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0623 07:09:25.456552      10 trace.go:205] Trace[1307542334]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (23-Jun-2022 07:09:15.454) (total time: 10001ms):
Trace[1307542334]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:09:25.456)
Trace[1307542334]: [10.001867983s] [10.001867983s] END
E0623 07:09:25.456706      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
... skipping matching "net/http: TLS handshake timeout" failures and ListAndWatch traces (07:09:26-07:09:28) for *v1.ConfigMap, *v1.CSINode, *v1.ReplicationController and *v1.PersistentVolume ...
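The failure mode changes here: "connection refused" means nothing was listening on 127.0.0.1:443, while "TLS handshake timeout" means the port is now open but the server did not complete a handshake within the client's 10s budget (the traces above show 10001ms), typical of an apiserver that is still starting. A sketch that distinguishes the two, under the same assumptions as the previous snippet (run locally against the logged endpoint; certificate verification skipped because this is a reachability probe only):

// tlsprobe.go: attempt a full TLS handshake with a 10-second deadline;
// "connection refused" vs. a context deadline error tells the two states apart.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	d := tls.Dialer{Config: &tls.Config{InsecureSkipVerify: true}} // probe only, no cert check
	conn, err := d.DialContext(ctx, "tcp", "127.0.0.1:443")
	if err != nil {
		fmt.Println("handshake failed:", err)
		return
	}
	conn.Close()
	fmt.Println("TLS handshake completed")
}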
W0623 07:09:31.520982      10 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:46886->127.0.0.1:443: read: connection reset by peer
E0623 07:09:31.521042      10 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:46886->127.0.0.1:443: read: connection reset by peer
... skipping matching "connection reset by peer" list/watch failures at 07:09:31 for *v1.CSIStorageCapacity, *v1.PersistentVolumeClaim, *v1.StorageClass, *v1.StatefulSet, *v1.PodDisruptionBudget, *v1.ReplicaSet, *v1.Namespace, *v1.Pod and *v1.Service ...
I0623 07:09:53.062027      10 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0623 07:09:53.062614      10 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubernetes-ca\" [] issuer=\"<self>\" (2022-06-21 07:05:44 +0000 UTC to 2032-06-20 07:05:44 +0000 UTC (now=2022-06-23 07:09:53.062573619 +0000 UTC))"
I0623 07:09:53.062853      10 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key" certDetail="\"kube-scheduler\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\"kubernetes-ca\" (2022-06-21 07:07:20 +0000 UTC to 2023-10-12 05:07:20 +0000 UTC (now=2022-06-23 07:09:53.062821178 +0000 UTC))"
I0623 07:09:53.063025      10 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1655968124\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1655968124\" (2022-06-23 06:08:43 +0000 UTC to 2023-06-23 06:08:43 +0000 UTC (now=2022-06-23 07:09:53.062995248 +0000 UTC))"
I0623 07:10:15.606860      10 node_tree.go:65] "Added node in listed group to NodeTree" node="master-us-central1-a-587c" zone=""
I0623 07:10:21.064444      10 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
I0623 07:10:21.075813      10 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
I0623 07:10:21.077285      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-5d4dbc7b59-786l5" err="0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling."
I0623 07:10:21.095133      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/kops-controller-nsdnd" node="master-us-central1-a-587c" evaluatedNodes=1 feasibleNodes=1
I0623 07:10:21.108102      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/dns-controller-78bc9bdd66-n6xk8" node="master-us-central1-a-587c" evaluatedNodes=1 feasibleNodes=1
I0623 07:10:21.116319      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-dd657c749-zwb7q" err="0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling."
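The "untolerated taint {node-role.kubernetes.io/control-plane: }" messages mean that only the control-plane node has registered at this point, and the coredns pods carry no toleration for its taint, so they stay Pending until a worker joins. For illustration only, a sketch of the toleration a pod would need to land on that tainted node, using the types from k8s.io/api/core/v1 (assumes that module is on the module path; this is not a manifest from the job):

// toleration.go: construct the toleration matching the taint key in the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/control-plane", // taint key from the log
		Operator: corev1.TolerationOpExists,               // tolerate regardless of value
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
	// coredns lacks such a toleration, so it remains Pending until a worker node joins.
}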
I0623 07:10:21.151790      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/cloud-controller-manager-8jnwg" node="master-us-central1-a-587c" evaluatedNodes=1 feasibleNodes=1
I0623 07:10:21.184730      10 schedule_one.go:263] "Successfully bound pod to node" pod="gce-pd-csi-driver/csi-gce-pd-controller-9f559494d-4mzqk" node="master-us-central1-a-587c" evaluatedNodes=1 feasibleNodes=1
I0623 07:10:39.171271      10 node_tree.go:79] "Removed node in listed group from NodeTree" node="master-us-central1-a-587c" zone=""
I0623 07:10:39.171807      10 node_tree.go:65] "Added node in listed group to NodeTree" node="master-us-central1-a-587c" zone="us-central1:\x00:us-central1-a"
... skipping matching NodeTree zone updates and DaemonSet pod bindings (csi-gce-pd-node, metadata-proxy-v0.12) as worker nodes nodes-us-central1-a-tdxw, nodes-us-central1-a-nk1s, nodes-us-central1-a-m5w1 and nodes-us-central1-a-50vm register (07:10:40-07:10:52) ...
I0623 07:11:00.767452      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/coredns-autoscaler-5d4dbc7b59-786l5" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=1
I0623 07:11:00.767757      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/coredns-dd657c749-zwb7q" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=1
I0623 07:11:04.040415      10 schedule_one.go:263] "Successfully bound pod to node" pod="kube-system/coredns-dd657c749-6225l" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:14:35.616477      10 schedule_one.go:263] "Successfully bound pod to node" pod="port-forwarding-7506/pfpod" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:14:38.050749      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="provisioning-8328/hostpath-injector" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
... skipping similar "Successfully bound pod to node" events for e2e test pods and repeated "no fit" retries for provisioning-8328/hostpath-injector (07:14:35-07:14:48) ...
I0623 07:14:49.199631      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-4029/pod-no-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
I0623 07:14:49.212092      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-4029/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod."
... skipping repeated "no fit" retries at 07:14:50 for provisioning-8328/hostpath-injector and limitrange-4029/pod-no-resources ...
I0623 07:14:50.249808      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="limitrange-4029/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. 
preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:50.767301      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-2832/hostexec-nodes-us-central1-a-50vm-54n7c\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:50.915561      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/service-proxy-toggled-k6xp9\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:50.950654      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/service-proxy-toggled-sblq8\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:50.956583      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/service-proxy-toggled-kdlrx\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:51.236949      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:51.333599      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-6564/sample-webhook-deployment-5f8b6c9658-wxsqs\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:52.639122      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:52.640853      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:52.641567      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. 
preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:53.041713      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-7359/test-rolling-update-deployment-8684b45d9-z7vw9\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:53.302135      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-8870/suspend-false-to-true-th928\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:53.328076      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-8870/suspend-false-to-true-krdqm\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:55.819368      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4520/pod2\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:55.852211      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2171/pod-subpath-test-preprovisionedpv-8dzq\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:56.001744      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-expand-542/pod-b44d83e1-0473-462d-82db-6a3ba73a9ce0\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:56.030805      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:56.266136      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:56.303543      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-8203/pod-subpath-test-preprovisionedpv-vgw2\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:56.635637      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-4133/pod-subpath-test-preprovisionedpv-vg8w\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:56.809897      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-7961/pod-projected-secrets-dc000de4-dbb6-4650-871d-3057567764c6\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:56.902093      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7756/pod-subpath-test-preprovisionedpv-pb5d\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:57.236086      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-2832/exec-volume-test-preprovisionedpv-vk4v\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:14:57.254285      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. 
preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:57.255926      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:57.690938      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:14:58.254562      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-8328/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:14:58.258562      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:14:58.359026      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-2458/pod-0\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:00.255866      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:15:00.306823      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"conntrack-8516/pod-client\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:00.865521      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1100/test-pod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:15:01.256548      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-4029/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient ephemeral-storage. 
preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.\"\nI0623 07:15:01.539466      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5093-2197/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:03.012211      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:03.067898      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2642/exceed-active-deadline-mnhng\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:03.110360      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-2642/exceed-active-deadline-c8cm9\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:03.111970      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-8497/update-demo-nautilus-5nwt8\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:03.199053      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-8497/update-demo-nautilus-p7z69\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:04.094736      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-9326/exec-volume-test-inlinevolume-vljl\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:05.027670      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-up-exec-pod-qxj7s\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:05.173203      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-runtime-7525/image-pull-testd84866ff-4f08-4397-a236-901161c69ca4\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:05.365854      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-6968b6cc76-4fr7h\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:05.375305      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-6968b6cc76-kj44j\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:05.394480      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-6968b6cc76-nm9jf\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:05.625825      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6293/pvc-volume-tester-67v94\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:06.113280      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8376/pvc-volume-tester-tdk65\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:06.194614      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2389/pod-subpath-test-inlinevolume-996z\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:06.672551      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-wl4r8\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.705608      10 schedule_one.go:263] \"Successfully bound pod 
to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-mfftp\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.705981      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-5jtf4\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.764419      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-jd42x\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.785963      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-q7ztk\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.802291      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-phlvj\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.805613      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-q22l5\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.823538      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-g2cqt\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.823880      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-gdm56\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.855361      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-9ttd8\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.893892      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-wzhc9\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.894667      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-d2d49\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.894784      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-qkfwp\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.895375      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-dpjrk\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.922797      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-7rg8p\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.933718      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-ssp5z\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.945435      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-msk4w\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 
feasibleNodes=4\nI0623 07:15:06.945513      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-hc6r5\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.988330      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-5dbcr\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.997225      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-nw45n\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.997225      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-sjgvx\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.997409      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-rkpgn\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.997528      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-724dm\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.997788      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-66vvh\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:06.998093      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-gmv45\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.015607      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-ndxfh\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.021598      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-cdvrl\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.021759      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-xlh9q\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.026441      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-lhq26\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.026599      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-9wc8z\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.027045      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-kmgvc\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.050878      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1100/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:15:07.126659      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-8dfgv\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.128401      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-sb7h9\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.161040      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-d5nrr\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.192508      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-4jkfr\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.248660      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-q6l9r\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.299999      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-rwbf7\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.336858      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-crmpm\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.378759      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-pb5z8\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.490205      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubelet-5418/cleanup40-9bdea1ea-fe39-4119-a73d-289e75be54c9-xwx6m\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.717958      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1695/externalname-service-9vdht\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.761537      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/verify-service-up-exec-pod-v65m2\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:07.816304      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1695/externalname-service-6rcmc\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:08.272179      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1100/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:15:08.281183      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-8328/hostpath-injector\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:10.814803      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3703/pod-91706ff7-17ab-4f7e-b728-ee5e20020cc7\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:10.948347      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7756/pod-subpath-test-preprovisionedpv-pb5d\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:11.561536      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"hostpath-3036/pod-host-path-test\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:13.165479      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8130/hostexec-nodes-us-central1-a-tdxw-vk4wv\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:14.347864      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-5999656f7d-5qbzg\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:14.382488      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-5999656f7d-xwcn2\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:14.383675      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-5999656f7d-khhh4\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:15.576225      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5093/pod-subpath-test-dynamicpv-p9xd\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:16.043498      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-153/pod-subpath-test-dynamicpv-hhdv\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:16.143868      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-3545/sample-webhook-deployment-5f8b6c9658-zw4tt\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:16.343025      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-9011/pod-projected-secrets-27c1db0f-e98b-4a3b-a6f6-7a6d1ca20cfe\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:16.362993      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-7772/pod-projected-configmaps-85fa83c4-460d-4305-86b3-acd51f9b01d4\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:16.800695      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:21.717531      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-5999656f7d-tr8tx\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:23.291550      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3256-3854/csi-mockplugin-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:23.293820      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3256-3854/csi-mockplugin-resizer-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:23.407903      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6819/hostexec-nodes-us-central1-a-m5w1-s5mqp\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:24.054950      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-down-host-exec-pod\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:24.325217      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"conntrack-8516/pod-server-1\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:25.466237      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-292/pod-0c3ab368-8a60-445e-852c-cc516c7967a8\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:25.507658      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-9282/webserver-5999656f7d-4ldmv\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:25.561102      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-8497/update-demo-nautilus-rsknq\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:25.603091      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1695/execpodp5hsm\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:25.870652      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"containers-4203/client-containers-3013da87-b924-47a7-aeb6-0309697e8985\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:27.270131      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5194/hostexec-nodes-us-central1-a-m5w1-hcct2\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:28.533763      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-3511/hostexec-nodes-us-central1-a-50vm-rlp96\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:29.184920      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-8910/test-orphan-deployment-68c48f9ff9-bxzcg\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:32.528394      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-down-host-exec-pod\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:32.724797      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-expansion-1577/hostexec-nodes-us-central1-a-m5w1-tm6z9\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:32.818402      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/verify-service-up-exec-pod-mrhl7\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:35.705950      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6293/pvc-volume-tester-mpqql\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:35.780665      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-6477/test-dns-nameservers\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 
feasibleNodes=4\nI0623 07:15:36.371828      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"subpath-8563/pod-subpath-test-downwardapi-m4vj\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:38.792200      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-8328/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:15:40.301442      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-8328/hostpath-client\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:41.350219      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-6574/test-ss-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:41.496491      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8130/pod-c853c5de-7fc3-4cd5-98be-0cd6c0d3429b\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:41.582569      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"secrets-9284/pod-secrets-9cbece7d-ccf4-43b9-8936-8f39bc603b55\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:41.655004      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3256/pvc-volume-tester-77dzh\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:41.666476      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-8432/pod-c5d21adf-e0ba-40bc-ac63-0abb4aac36bc\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:42.144312      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6819/pod-subpath-test-preprovisionedpv-mhnr\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:42.181023      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-wrapper-1827/pod-secrets-c4deac6d-d9a5-4624-a93c-7c54f1261923\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:42.464171      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"conntrack-8516/pod-server-2\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:42.758796      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-3511/hostexec-nodes-us-central1-a-nk1s-9t5hj\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:42.829355      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:43.586703      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2942/inline-volume-rd959\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-rd959-my-volume\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:15:43.687747      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5093/pod-subpath-test-dynamicpv-p9xd\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:43.985582      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-7890/logs-generator\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:45.427509      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-expansion-1577/pod-589b00fd-b6da-4b34-8ee1-589f71f6a78a\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:45.906514      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2942/inline-volume-tester-tfxk8\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-tfxk8-my-volume-0\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:15:45.983299      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-2942-7916/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:47.861851      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-358/pod-subpath-test-inlinevolume-2hmx\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:49.115351      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-3511/pod1\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:49.226339      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-559/httpd\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:49.397401      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-test-3435/busybox-privileged-false-61c0494c-2abf-48db-8375-8e0c34cdf6f2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:50.866628      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6742/httpd-deployment-79c68f679b-6dxfh\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:50.886978      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6742/httpd-deployment-79c68f679b-9rhzj\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:51.283113      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6742/httpd-deployment-79c68f679b-cbhxd\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:51.486781      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6742/httpd-deployment-79bc68c759-h7hk6\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:51.651013      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"configmap-5598/pod-configmaps-79a77f33-98f4-49a2-8d83-176e0edcb921\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:51.844474      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8130/pod-e7e5c22a-64a4-4708-91df-f720c0fda9d9\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:52.302419      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"services-4894/verify-service-down-host-exec-pod\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:52.314619      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-2942/inline-volume-tester-tfxk8\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:54.216168      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6819/pod-subpath-test-preprovisionedpv-mhnr\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:55.191524      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-3511/hostexec-nodes-us-central1-a-50vm-jr5gs\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:55.584619      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-283/netserver-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:55.612155      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-283/netserver-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:55.612689      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-283/netserver-2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:55.629737      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-283/netserver-3\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:15:55.710428      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-5194/pod-subpath-test-preprovisionedpv-2qh9\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:15:55.806407      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-runtime-2137/image-pull-test96fa4e01-6b26-4c44-b0fe-547d020e3944\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:55.914980      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-5888/pod-projected-configmaps-d3870644-199b-4586-a35a-8494701297a5\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:56.865113      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-up-exec-pod-5bw7t\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:15:57.959683      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-100/hostexec-nodes-us-central1-a-tdxw-46dvf\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:00.418239      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-7445/pod-ready\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:00.548579      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/verify-service-up-host-exec-pod\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:00.960496      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-219/security-context-3b00c823-7dab-4078-ba6c-8a4395cd0845\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:01.390529      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-4439/hostexec-nodes-us-central1-a-50vm-khgwn\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:01.644176      10 schedule_one.go:263] \"Successfully bound pod to 
node\" pod=\"pods-4931/pod-terminate-status-0-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:01.662727      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-2-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:01.671575      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-1-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:01.776485      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-1443/verify-service-down-host-exec-pod\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:02.623456      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2255/hostexec-nodes-us-central1-a-nk1s-7vwk2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:02.797594      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8649/hostexec-nodes-us-central1-a-50vm-n2fpz\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:03.307568      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-3511/hostexec-nodes-us-central1-a-nk1s-gx4fs\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:04.510485      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-4272/security-context-feebee97-c3f3-485f-9291-4dcfe1872b0b\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:04.561571      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/verify-service-up-exec-pod-x9ss9\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:05.427853      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-1-1\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:07.892992      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-8694/hostexec-nodes-us-central1-a-50vm-j7pkk\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:08.515990      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-0-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:08.559579      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-1681/metadata-volume-b8ebee5e-79e3-41b8-b138-63b030ae47f5\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:08.808475      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-4253/downwardapi-volume-f186eace-e999-4e34-9311-9bb2ce66d73a\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:08.983789      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8649/pod-9aa2a7f5-d425-4eab-891d-d0694d9a1827\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:16:09.066069      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2255/pod-fb84782c-02d9-4e4a-a3bb-cfc520498579\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:16:09.177359      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"emptydir-2080/pod-8ae51c15-5a02-4890-80d6-83a5cfab0b47\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:10.419731      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-2-1\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:10.850905      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-1-2\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:11.562355      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-4439/pod-subpath-test-preprovisionedpv-rzxb\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:11.714409      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7028-1146/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:11.829363      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-981/inline-volume-8kjhd\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-8kjhd-my-volume\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:16:12.281352      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7923-7218/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:12.299521      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7923-7218/csi-mockplugin-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:12.411230      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-100/pod-subpath-test-preprovisionedpv-gjhj\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:16:12.656710      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-0-2\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:12.814379      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"proxy-2881/proxy-service-krbzn-dxmkp\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:13.792882      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/up-down-3-bdjcp\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:13.854775      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/up-down-3-gvgrx\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:13.855921      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-4894/up-down-3-pw9cb\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:16:14.522422      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-981/inline-volume-tester-ccpng\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-ccpng-my-volume-0\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:16:14.602747      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-981-8440/csi-hostpathplugin-0" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:15.395722      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-1604/all-pods-removed-q8k5j" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:15.397166      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-1604/all-pods-removed-z8rlx" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:15.679292      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-7742/hostexec-nodes-us-central1-a-m5w1-w2q84" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:15.736000      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7028/pod-subpath-test-dynamicpv-p6qb" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:17.113394      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-201/pod-1013db6e-463a-413c-8ff9-8c7375a9f6fc" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:17.293563      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-3" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:17.499373      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-9414/test-grpc-13a05707-87a2-4932-9e04-553f796ec16d" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.693388      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-7n9fl" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.711227      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-nzth9" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.717811      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-gw6rh" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.751300      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-j82rc" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.758775      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-82dbq" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.758895      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-nk4q4" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.758969      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-ztrlw" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.759052      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xhvhn" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.809455      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-cb5lr" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.841499      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-lqmdv" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.847269      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-gzvw4" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.847232      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-n4dvm" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.847479      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-s6pr8" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.860556      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-rqzh9" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.860610      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-9k2wn" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.932570      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-qfdwh" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.932784      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-gqc8f" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.932966      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-s4dmk" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.940870      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-ck2p2" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.940944      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-gpprg" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.947594      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-8q6wt" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.947953      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-4f6mn" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.948047      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-z26jz" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.948135      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-zb2mr" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.948232      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-sj27d" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.948488      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-f7qxm" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.955468      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-pxc2z" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.955617      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-x878t" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.956256      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-rc72z" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:18.970692      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-nvr4v" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.004004      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-tlsp2" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.067105      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-v7zj9" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.114082      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-gs8p2" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.184426      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-kfzrx" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.241154      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xv6mk" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.278531      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-5ndpv" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.311500      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-z89sp" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.360991      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-lswj5" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.413135      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-fbjxx" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.463509      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-wxktf" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.511883      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-8jkjt" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.559795      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-7859p" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.605077      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-lqg2v" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.651889      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-5b7xb" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.702485      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-7j78w" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.761051      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-wc98f" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.814373      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-8061/sample-webhook-deployment-5f8b6c9658-v9m8q" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.861322      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xn5mg" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.873581      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-smlkf" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.906446      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-5rhcr" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:19.959546      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-zw5cw" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.018471      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-mb76d" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.064980      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-rbngp" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.115663      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-pkgg4" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.170762      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xj56x" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.207257      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-rtm99" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.255688      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-kp5xz" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.304294      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-mv8kv" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.355540      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-742p5" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.429595      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-7jszj" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.472946      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-7458p" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.552927      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-bpdhb" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.584996      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-mvnlc" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.657404      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-9727/termination-message-containera9695ac8-f5a7-4e4d-807f-a05c4edc06b3" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.668932      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-r5nvd" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.729150      10 schedule_one.go:263] "Successfully bound pod to node" pod="dns-2934/dns-test-ba878833-2828-4c6f-a20d-3aa142f96234" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.746672      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-wvpsh" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.785128      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-9vbv7" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.825417      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-j8np4" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.859014      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-rqttn" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.904312      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-ktqj7" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:20.960636      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-df7ms" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.016697      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-d7cbp" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.061486      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-j5pgp" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.101060      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-sdvtb" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.155687      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-zkdvv" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.203619      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-bnbgm" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.253188      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-qxt7n" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.309071      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-ntgwh" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.350325      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-k2hgr" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.403878      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-pgxph" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.452487      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-gr6gt" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.512070      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xq7bp" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.553412      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-tpcjv" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.602898      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-mrshb" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.652446      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-lp4hq" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.716382      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-dxwfg" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.756218      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-jqzbw" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.803976      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-2j8dg" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.863686      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-l7ntw" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.918899      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-ss8kw" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:21.957235      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-mxrzc" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.018532      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-hq8gj" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.055680      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xqtgp" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.106785      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-dzf4m" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.155599      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xk8tp" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.228038      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-b5x45" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.266632      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-524b6" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.317193      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-fxkd6" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.355893      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-8g6j5" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.409746      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-hwzh6" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.480987      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-dglcq" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:22.510723      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4026/simpletest-rc-to-be-deleted-xljms" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:25.694283      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-2" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:26.537324      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-8694/pod-subpath-test-preprovisionedpv-bnsc" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:27.213575      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-9622/pod-projected-secrets-7e25587f-70f3-45ba-aa8c-8b152474c469" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:27.838410      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-7923/pvc-volume-tester-9ndph" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:29.418541      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-6715/pod-configmaps-a34fc5ca-64a3-43de-a670-109908b7a6ed" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:29.914899      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-8505/suspend-false-to-true-mlbfv" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:29.935141      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-8505/suspend-false-to-true-r957q" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:34.296591      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-4" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:36.374607      10 schedule_one.go:263] "Successfully bound pod to node" pod="replication-controller-4231/pod-release-tcwln" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:37.054182      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-3" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:37.668715      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-283/test-container-pod" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:38.501207      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-7426/termination-message-container2bbc980c-58f9-47fe-80e7-7ab7ab649aa3" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:39.266484      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-3" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:39.362041      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-981/inline-volume-tester-ccpng" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:40.004576      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-4" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:40.830337      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4894/verify-service-up-host-exec-pod" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:40.964046      10 schedule_one.go:263] "Successfully bound pod to node" pod="pvc-protection-1962/pvc-tester-cph9s" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:41.429380      10 schedule_one.go:263] "Successfully bound pod to node" pod="replication-controller-4231/pod-release-86mh5" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:42.080802      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-7742/pod-4b649149-0763-4e8c-ae06-fffe9213bc51" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=1
I0623 07:16:42.654655      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-8222/pod-projected-configmaps-470f14d6-1ac2-4fc8-a19c-415c21820b9e" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:43.656086      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-9669/pod-configmaps-ae40fc52-9343-47d5-85f4-0e35367c7171" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:44.882338      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-5" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:45.078518      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-2531/agnhost-primary-dt76p" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:45.213684      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-5331/e2e-k9gll-9ddj9" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:45.234533      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-5331/e2e-k9gll-wvxvj" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
E0623 07:16:45.282373      10 framework.go:1046] "Failed running Bind plugin" err="pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated" plugin="DefaultBinder" pod="kubectl-2531/agnhost-primary-p25jt"
I0623 07:16:45.282480      10 schedule_one.go:794] "Failed to bind pod" pod="kubectl-2531/agnhost-primary-p25jt"
E0623 07:16:45.282634      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated" pod="kubectl-2531/agnhost-primary-p25jt"
E0623 07:16:45.292278      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"agnhost-primary-p25jt.16fb2eac3d133116", GenerateName:"", Namespace:"kubectl-2531", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kubectl-2531", Name:"agnhost-primary-p25jt", UID:"3f9b3bce-ba7f-4562-9f41-f1e9e708878d", APIVersion:"v1", ResourceVersion:"7361", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 16, 45, 282709782, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 16, 45, 282709782, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "agnhost-primary-p25jt.16fb2eac3d133116" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated' (will not retry!)
I0623 07:16:45.415703      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-5096/metadata-volume-a4597fd6-006c-4401-9228-87415c3be231" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:46.053146      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-8285/downwardapi-volume-92ed91dd-878c-4999-ae67-38a1ac05121c" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
E0623 07:16:46.354630      10 framework.go:1046] "Failed running Bind plugin" err="pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated" plugin="DefaultBinder" pod="kubectl-2531/agnhost-primary-p25jt"
I0623 07:16:46.354674      10 schedule_one.go:794] "Failed to bind pod" pod="kubectl-2531/agnhost-primary-p25jt"
E0623 07:16:46.355050      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated" pod="kubectl-2531/agnhost-primary-p25jt"
E0623 07:16:46.363129      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"agnhost-primary-p25jt.16fb2eac3d133116", GenerateName:"", Namespace:"kubectl-2531", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kubectl-2531", Name:"agnhost-primary-p25jt", UID:"3f9b3bce-ba7f-4562-9f41-f1e9e708878d", APIVersion:"v1", ResourceVersion:"7363", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 16, 45, 282709782, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 16, 46, 355184097, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "agnhost-primary-p25jt.16fb2eac3d133116" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated' (will not retry!)
I0623 07:16:46.802695      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-5" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:46.857261      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4894/verify-service-up-exec-pod-c9gfc" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:48.303257      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-7742/pod-48c55389-8c40-4c7b-b331-3acbfc5b78f8" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=1
I0623 07:16:48.439584      10 schedule_one.go:263] "Successfully bound pod to node" pod="webhook-3831/sample-webhook-deployment-5f8b6c9658-zf9dx" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:49.026478      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-3607/pod-projected-configmaps-93dfbe64-4114-4c85-952a-b54c03236406" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
E0623 07:16:49.355912      10 framework.go:1046] "Failed running Bind plugin" err="pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated" plugin="DefaultBinder" pod="kubectl-2531/agnhost-primary-p25jt"
I0623 07:16:49.355955      10 schedule_one.go:794] "Failed to bind pod" pod="kubectl-2531/agnhost-primary-p25jt"
E0623 07:16:49.356006      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated" pod="kubectl-2531/agnhost-primary-p25jt"
E0623 07:16:49.362594      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"agnhost-primary-p25jt.16fb2eac3d133116", GenerateName:"", Namespace:"kubectl-2531", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kubectl-2531", Name:"agnhost-primary-p25jt", UID:"3f9b3bce-ba7f-4562-9f41-f1e9e708878d", APIVersion:"v1", ResourceVersion:"7363", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"agnhost-primary-p25jt\" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 16, 45, 282709782, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 16, 49, 356344809, time.Local), Count:3, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "agnhost-primary-p25jt.16fb2eac3d133116" is forbidden: unable to create new content in namespace kubectl-2531 because it is being terminated' (will not retry!)
I0623 07:16:49.590958      10 schedule_one.go:263] "Successfully bound pod to node" pod="dns-2374/dns-test-de1e54b0-87a7-46a2-b20f-4f8949d510b7" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:49.857590      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-4479/termination-message-containera658451c-755e-4478-a507-0b0720dfe1e9" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:51.007846      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-4" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:51.996983      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1021/kube-proxy-mode-detector" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:52.260615      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-5331/e2e-k9gll-2g8d2" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:52.965025      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-1654/hostexec-nodes-us-central1-a-m5w1-kx6t7" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:53.020764      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-6" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:53.258038      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-8666/pod-c8e51d56-6a06-4908-a176-d5b275703f55" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:53.281826      10 schedule_one.go:263] "Successfully bound pod to node" pod="secrets-4127/pod-secrets-37c3ca75-13b6-4e08-ae03-183265d2e92d" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:55.670079      10 schedule_one.go:263] "Successfully bound pod to node" pod="job-5331/e2e-k9gll-p4hcs" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:55.797465      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-7" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:55.819678      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-1084/busybox-e9c1e2b5-daca-41ce-b0e8-63d513334468" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:56.516409      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-2068/pod-configmaps-7377606a-0081-464a-be6e-0a5e46440598" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:57.706670      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-874-2266/csi-mockplugin-0" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:57.709784      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-874-2266/csi-mockplugin-attacher-0" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:16:58.026967      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4894/verify-service-up-host-exec-pod" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:58.577752      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-6" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:59.136302      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-5" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:16:59.744425      10 schedule_one.go:263] "Successfully bound pod to node" pod="clientset-2178/pod7e59118d-eba2-499b-915d-7fa09b2ee460" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:00.096215      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3135-2/csi-mockplugin-0" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:00.398741      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-4382/netserver-0" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:00.438509      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-4382/netserver-1" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:00.482377      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-4382/netserver-2" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:00.504703      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-4382/netserver-3" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:01.742687      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-8" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:02.262189      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubelet-test-800/bin-falsed394609c-f79a-4882-a919-f41695abaa38" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:02.630312      10 schedule_one.go:263] "Successfully bound pod to node" pod="port-forwarding-6726/pfpod" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:03.495477      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-6538/pod-projected-configmaps-670df27a-e01b-4de4-b80a-2bec0b8308cd" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:04.064097      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-lifecycle-hook-8012/pod-handle-http-request" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:04.198323      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-7" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:05.423086      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7447/hostexec-nodes-us-central1-a-50vm-t8mng" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:06.269511      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1021/hostexec-nodes-us-central1-a-tdxw-wczn5" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:06.666134      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-6" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:07.094358      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-3135/pvc-volume-tester-2j4rb" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:07.644677      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-8290-1939/csi-mockplugin-attacher-0" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:07.666144      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-8290-1939/csi-mockplugin-0" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:07.727904      10 schedule_one.go:263] "Successfully bound pod to node" pod="proxy-5607/agnhost" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:07.737656      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-9" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:07.844932      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-1602/metadata-volume-07cdad16-68d3-4611-9a65-b8b5b4764d56" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:08.066793      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-4894/verify-service-up-exec-pod-twm4q" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:08.462262      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-4890/test-pod-4f4f792f-e258-42ac-a22d-faada81fc74b" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:09.320920      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-599/hostexec-nodes-us-central1-a-m5w1-pgjkt" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:09.691030      10 schedule_one.go:263] "Successfully bound pod to node" pod="persistent-local-volumes-test-1654/pod-b963a4de-9423-4230-a3c5-56c743adb29b" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:10.199657      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-8" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:10.386022      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1021/hostexec-nodes-us-central1-a-50vm-b4xck" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
E0623 07:17:10.761653      10 framework.go:1046] "Failed running Bind plugin" err="Operation cannot be fulfilled on pods/binding \"bin-false56628563-350d-49e9-8eec-714095c37f98\": pod bin-false56628563-350d-49e9-8eec-714095c37f98 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kubelet-test-8203/bin-false56628563-350d-49e9-8eec-714095c37f98"
I0623 07:17:10.762121      10 schedule_one.go:794] "Failed to bind pod" pod="kubelet-test-8203/bin-false56628563-350d-49e9-8eec-714095c37f98"
E0623 07:17:10.762404      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"bin-false56628563-350d-49e9-8eec-714095c37f98\": pod bin-false56628563-350d-49e9-8eec-714095c37f98 is being deleted, cannot be assigned to a host" pod="kubelet-test-8203/bin-false56628563-350d-49e9-8eec-714095c37f98"
I0623 07:17:11.624081      10 schedule_one.go:263] "Successfully bound pod to node" pod="events-6054/send-events-b8142f70-4870-43cf-9eac-67492cc98a5e" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:11.994206      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-7" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:12.487364      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="resourcequota-4741/pfpod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:17:13.956658      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-874/pvc-volume-tester-cj6m7" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:14.007142      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-10" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.430842      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-defaultsa" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.448162      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-mountsa" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.455162      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-nomountsa" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.481268      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-defaultsa-mountspec" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.496551      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-mountsa-mountspec" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.496550      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-nomountsa-mountspec" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.524962      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-defaultsa-nomountspec" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.557558      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-mountsa-nomountspec" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:14.563957      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-2715/pod-service-account-nomountsa-nomountspec" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
E0623 07:17:14.588779      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-service-account-nomountsa-nomountspec.16fb2eb30e5f1284", GenerateName:"", Namespace:"svcaccounts-2715", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"svcaccounts-2715", Name:"pod-service-account-nomountsa-nomountspec", UID:"ab94ea70-d47e-433c-bbe4-11e66e3e235c", APIVersion:"v1", ResourceVersion:"8866", FieldPath:""}, Reason:"Scheduled", Message:"Successfully assigned svcaccounts-2715/pod-service-account-nomountsa-nomountspec to nodes-us-central1-a-50vm", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 17, 14, 563924612, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 17, 14, 563924612, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-service-account-nomountsa-nomountspec.16fb2eb30e5f1284" is forbidden: unable to create new content in namespace svcaccounts-2715 because it is being terminated' (will not retry!)
I0623 07:17:14.643611      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-8427/liveness-f3b7b7e1-b86d-4860-bd93-10f18e8b1e8c" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:15.037553      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="resourcequota-4810/test-pod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 Insufficient example.com/dongle, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:17:15.990668      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-874/inline-volume-g8kv8" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:17.967800      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-4080-2752/csi-hostpathplugin-0" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:18.029714      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-7063/test-new-deployment-68c48f9ff9-jxktg" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:18.081011      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-lifecycle-hook-8012/pod-with-poststart-http-hook" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:18.547305      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="resourcequota-4741/burstable-pod" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:17:19.124042      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-8290/pvc-volume-tester-9mmn2" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:20.063760      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-4818/downward-api-c485ab9f-e604-45d7-a335-04f07be328aa" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:20.899096      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4608/simpletest.rc-8m4pf" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:20.917435      10 schedule_one.go:263] "Successfully bound pod to node" pod="gc-4608/simpletest.rc-gkdwc" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:21.236935      10 schedule_one.go:263] "Successfully bound pod to node" pod="prestop-9479/server" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:21.416018      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-8" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:21.998574      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-11" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:22.249501      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-3882/test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:22.323330      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-4421/probe-test-38a8854b-10ea-4312-8e02-6bffc0837ebc" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:23.797917      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-9" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:23.979612      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-4080/pod-0d8a66ee-eb33-4cbc-ad17-5987d97177dc" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:24.396018      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-9" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:24.551475      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-4382/test-container-pod" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:24.566548      10 schedule_one.go:263] "Successfully bound pod to node" pod="pod-network-test-4382/host-test-container-pod" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:24.711909      10 schedule_one.go:263] "Successfully bound pod to node" pod="init-container-9493/pod-init-7981d87c-9c4a-4b64-8169-f746f04b1555" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:24.916519      10 schedule_one.go:263] "Successfully bound pod to node" pod="downward-api-813/downward-api-95c79c3e-186a-46f7-8ad3-734cf2997a93" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:25.021480      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4522/netserver-0" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:25.059474      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4522/netserver-2" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:25.059905      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4522/netserver-1" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:25.075850      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4522/netserver-3" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=1
I0623 07:17:26.103535      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-7447/pod-subpath-test-preprovisionedpv-c6kp" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:26.525235      10 schedule_one.go:263] "Successfully bound pod to node" pod="configmap-3868/pod-configmaps-7bada886-2e62-46e6-a3fd-d53307ce9098" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:26.779129      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1021/hostexec-nodes-us-central1-a-tdxw-644mx" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:26.838870      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-6049/hostexec-nodes-us-central1-a-nk1s-sbvz7" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:26.955872      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-3155-628/csi-hostpathplugin-0" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:27.517576      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-599/pod-subpath-test-preprovisionedpv-ktqb" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:28.648658      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-12" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:29.321450      10 schedule_one.go:263] "Successfully bound pod to node" pod="prestop-9479/tester" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:29.867979      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-10" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:30.303618      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-3882/test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:30.719676      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-10" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:31.193352      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-679/pod-projected-secrets-6d279977-155e-404f-a15b-4f5707b69d48" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:32.383778      10 schedule_one.go:263] "Successfully bound pod to node" pod="containers-6707/client-containers-31e5c973-3262-4493-acf6-f4023d13f7fe" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:32.987349      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-3155/pod-37597836-fca5-4357-ace5-89daa4be2dae" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:34.778393      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-1056/liveness-c0cf88fd-c96b-4834-8e5d-b146fbe109aa" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:35.220372      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-13" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:36.427527      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-11" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:36.524877      10 schedule_one.go:263] "Successfully bound pod to node" pod="port-forwarding-216/pfpod" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:36.886337      10 schedule_one.go:263] "Successfully bound pod to node" pod="apply-4260/deployment-6c468f5898-m45gw" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:36.936630      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1021/hostexec-nodes-us-central1-a-tdxw-g5l64" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:36.937166      10 schedule_one.go:263] "Successfully bound pod to node" pod="apply-4260/deployment-6c468f5898-r697j" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:36.937459      10 schedule_one.go:263] "Successfully bound pod to node" pod="apply-4260/deployment-6c468f5898-w96xs" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
E0623 07:17:36.953509      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-6c468f5898-r697j.16fb2eb843eb58cd", GenerateName:"", Namespace:"apply-4260", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"apply-4260", Name:"deployment-6c468f5898-r697j", UID:"12ec91b0-cdbd-4094-bcb3-2789e440abcd", APIVersion:"v1", ResourceVersion:"9866", FieldPath:""}, Reason:"Scheduled", Message:"Successfully assigned apply-4260/deployment-6c468f5898-r697j to nodes-us-central1-a-m5w1", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 17, 36, 937146573, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 17, 36, 937146573, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "deployment-6c468f5898-r697j.16fb2eb843eb58cd" is forbidden: unable to create new content in namespace apply-4260 because it is being terminated' (will not retry!)
E0623 07:17:36.965310      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-6c468f5898-w96xs.16fb2eb843efc57e", GenerateName:"", Namespace:"apply-4260", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"apply-4260", Name:"deployment-6c468f5898-w96xs", UID:"1adf01dd-0c1d-45a5-b0dd-429f759b9401", APIVersion:"v1", ResourceVersion:"9864", FieldPath:""}, Reason:"Scheduled", Message:"Successfully assigned apply-4260/deployment-6c468f5898-w96xs to nodes-us-central1-a-50vm", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 17, 36, 937436542, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 17, 36, 937436542, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "deployment-6c468f5898-w96xs.16fb2eb843efc57e" is forbidden: unable to create new content in namespace apply-4260 because it is being terminated' (will not retry!)
I0623 07:17:37.478968      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-8199-849/csi-hostpathplugin-0" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:38.346344      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-3882/test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:38.454484      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-656/ss-0" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:40.460987      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-0-11" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:40.749982      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-306/test-rolling-update-with-lb-859dd947bd-q2k9t" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:40.854091      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-306/test-rolling-update-with-lb-859dd947bd-xhlmg" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=2
I0623 07:17:40.854823      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-306/test-rolling-update-with-lb-859dd947bd-w9v87" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=3
I0623 07:17:41.039352      10 schedule_one.go:263] "Successfully bound pod to node" pod="deployment-4110/test-deployment-wb7lp-6465649447-mhc8m" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:41.305492      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-6049/local-injector" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:41.502283      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-9822/inline-volume-tester-t745n" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:41.523173      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-9822-9134/csi-hostpathplugin-0" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:43.242296      10 schedule_one.go:263] "Successfully bound pod to node" pod="sctp-1021/hostexec-nodes-us-central1-a-50vm-rccf6" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:43.519001      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-8199/hostpath-injector" node="nodes-us-central1-a-nk1s" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:44.112033      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-4080/pod-70d2acd0-beb1-4e91-90f7-03aa1e20da04" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:44.544636      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-lifecycle-hook-6867/pod-handle-http-request" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:44.590746      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-6834/pod-subpath-test-inlinevolume-xsxw" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:44.613750      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-2-14" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:44.750835      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-runtime-6301/image-pull-test6520f671-56b4-4657-9ac5-342d9f7e57ae" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:48.399208      10 schedule_one.go:263] "Successfully bound pod to node" pod="svcaccounts-3882/test-pod-5b43d21f-6834-4d02-90b8-ace9b2e2894e" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:48.571190      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-lifecycle-hook-6867/pod-with-poststart-exec-hook" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:49.222949      10 schedule_one.go:263] "Successfully bound pod to node" pod="container-probe-2939/liveness-9ff4f716-cd2e-4ff4-b2f4-453762944992" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:50.072005      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-2956/pod-1cfff29e-66c4-4980-80c4-cc4f01cf1635" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:53.214887      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4522/test-container-pod" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:53.214799      10 schedule_one.go:263] "Successfully bound pod to node" pod="nettest-4522/host-test-container-pod" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:53.310643      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-4661/hostexec-nodes-us-central1-a-m5w1-gp8ff" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:17:53.559693      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-5650/busybox1" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:54.408270      10 schedule_one.go:263] "Successfully bound pod to node" pod="pods-4931/pod-terminate-status-1-12" node="nodes-us-central1-a-50vm" evaluatedNodes=5 feasibleNodes=4
I0623 07:17:54.539179      10 schedule_one.go:263]
\"Successfully bound pod to node\" pod=\"provisioning-2574/pod-subpath-test-inlinevolume-46jr\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:17:55.792744      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-0-12\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:17:56.168911      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-5770/pod-logs-websocket-f6659ec1-8080-46f4-a098-ba219073023f\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:17:58.253379      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1394/pod-subpath-test-inlinevolume-kvxh\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:17:58.793746      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-7867/annotationupdatef2a9a7e8-a591-4aea-bd05-5d05271fab66\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:17:59.628662      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-1-13\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:00.155780      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"cronjob-5091/failed-jobs-history-limit-27599478-qhdt8\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:00.189609      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-0-13\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:00.717120      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"secrets-2215/pod-secrets-9320c367-cee4-4a56-89f0-98e5d54b5bbb\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:01.606586      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"secrets-5531/pod-secrets-27b95091-badb-4fca-90e9-74e768c3e75a\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:03.008348      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-1-14\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:03.065626      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-272/test-ss-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:04.042873      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-6049/local-client\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:04.344353      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-4746/startup-b90bd9ef-5fb0-4ddb-ba00-59da20022ed9\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:05.232482      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6611/httpd\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:05.820828      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-4931/pod-terminate-status-0-14\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:06.264648      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-6098/pod-projected-secrets-c3a2db0c-b451-4c50-9238-66a9b583792c\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:06.895283      10 
schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7871/hostexec-nodes-us-central1-a-nk1s-pw6rf\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:07.547116      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2941/netserver-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:07.561072      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2941/netserver-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:07.577520      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2941/netserver-2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:07.610243      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2941/netserver-3\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:09.117915      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1870-2330/csi-mockplugin-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:09.168262      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1870-2330/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:09.527698      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-5836/sample-webhook-deployment-5f8b6c9658-dgndk\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:10.043881      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-7924/pod-65bde3af-e90c-4580-80d7-110688f13cce\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:11.541556      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-4661/exec-volume-test-preprovisionedpv-h4g8\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:11.680856      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"configmap-6991/pod-configmaps-ad855b66-a263-43dc-b1ea-f78e460a6239\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:13.307554      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-272/test-ss-1\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:13.883763      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pvc-protection-5034/pvc-tester-wjws8\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:14.267893      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-656/ss-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:14.409237      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8199/hostpath-client\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:14.646241      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-4791/hostexec-nodes-us-central1-a-tdxw-6vdtl\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:15.511830      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2798-2482/csi-mockplugin-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:16.208678      10 schedule_one.go:263] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-1870/pvc-volume-tester-mjcxz\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:16.339307      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-3371/pod-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:16.366588      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-3371/pod-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:16.376357      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"disruption-3371/pod-2\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:17.973081      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-3123/no-cross-namespace-affinity\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:18.002604      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-4611/ss-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:18.011215      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-3123/with-namespaces\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:18.109512      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-1056/all-succeed-blzng\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:18.128277      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-1056/all-succeed-97l9k\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:19.344985      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6611/failure-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:20.024220      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"cross-namespace-pod-affinity-3123/with-namespace-selector\" err=\"0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:20.218622      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-272/test-ss-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:21.575373      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-5491/dns-test-c5d55c9a-15f8-4624-be68-14942194c7de\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:23.350139      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-1056/all-succeed-kwkfc\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:23.529552      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-830/hostexec-nodes-us-central1-a-nk1s-qn8qv\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:24.304986      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-9579/deployment-5cd6f9c8c9-nw954\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nE0623 07:18:24.344490      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-5cd6f9c8c9-nw954.16fb2ec34b42da90\", GenerateName:\"\", Namespace:\"apply-9579\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-9579\", Name:\"deployment-5cd6f9c8c9-nw954\", UID:\"009c4a1e-03f6-48a8-8299-349283cddabc\", APIVersion:\"v1\", ResourceVersion:\"11577\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned apply-9579/deployment-5cd6f9c8c9-nw954 to nodes-us-central1-a-m5w1\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 304962192, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 304962192, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-5cd6f9c8c9-nw954.16fb2ec34b42da90\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated' (will not retry!)\nE0623 07:18:24.345110      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"deployment-6c468f5898-575bg\\\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"apply-9579/deployment-6c468f5898-575bg\"\nI0623 07:18:24.345368      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"apply-9579/deployment-6c468f5898-575bg\"\nE0623 07:18:24.345589      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-6c468f5898-575bg\\\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated\" pod=\"apply-9579/deployment-6c468f5898-575bg\"\nE0623 07:18:24.345926      10 framework.go:1046] \"Failed running Bind 
plugin\" err=\"pods \\\"deployment-5cd6f9c8c9-t896f\\\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"apply-9579/deployment-5cd6f9c8c9-t896f\"\nI0623 07:18:24.345982      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"apply-9579/deployment-5cd6f9c8c9-t896f\"\nE0623 07:18:24.346138      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-5cd6f9c8c9-t896f\\\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated\" pod=\"apply-9579/deployment-5cd6f9c8c9-t896f\"\nI0623 07:18:24.346802      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-9579/deployment-5cd6f9c8c9-ld4n6\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:24.376615      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"job-1056/all-succeed-z9ds2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nE0623 07:18:24.376951      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-6c468f5898-575bg.16fb2ec34db25c2c\", GenerateName:\"\", Namespace:\"apply-9579\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-9579\", Name:\"deployment-6c468f5898-575bg\", UID:\"1979f3eb-94fd-48fc-a978-7b4b73348da4\", APIVersion:\"v1\", ResourceVersion:\"11583\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-6c468f5898-575bg\\\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 345824300, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 345824300, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-6c468f5898-575bg.16fb2ec34db25c2c\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated' (will not retry!)\nE0623 07:18:24.381660      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-5cd6f9c8c9-t896f.16fb2ec34db9c905\", GenerateName:\"\", Namespace:\"apply-9579\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-9579\", Name:\"deployment-5cd6f9c8c9-t896f\", 
UID:\"dcb55923-aa3d-4d1b-852f-9d800e3766a6\", APIVersion:\"v1\", ResourceVersion:\"11580\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"deployment-5cd6f9c8c9-t896f\\\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 346310917, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 346310917, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-5cd6f9c8c9-t896f.16fb2ec34db9c905\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated' (will not retry!)\nE0623 07:18:24.385110      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-5cd6f9c8c9-ld4n6.16fb2ec34dc08da4\", GenerateName:\"\", Namespace:\"apply-9579\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"apply-9579\", Name:\"deployment-5cd6f9c8c9-ld4n6\", UID:\"995fb007-414f-442a-92f5-299a2d370f53\", APIVersion:\"v1\", ResourceVersion:\"11579\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned apply-9579/deployment-5cd6f9c8c9-ld4n6 to nodes-us-central1-a-50vm\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 346754468, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 18, 24, 346754468, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-5cd6f9c8c9-ld4n6.16fb2ec34dc08da4\" is forbidden: unable to create new content in namespace apply-9579 because it is being terminated' (will not retry!)\nI0623 07:18:24.484419      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"container-probe-7595/test-webserver-0446a98a-9e54-4e71-a000-31c803e8d3c4\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:24.542560      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-168/hostexec-nodes-us-central1-a-tdxw-ptwxw\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:26.431724      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1637/netserver-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:26.440020      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1637/netserver-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:26.460633      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1637/netserver-2\" node=\"nodes-us-central1-a-nk1s\" 
evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:26.464726      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1637/netserver-3\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:27.046973      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-4791/pod-subpath-test-preprovisionedpv-zn2g\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:27.204438      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7871/pod-subpath-test-preprovisionedpv-xdkl\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:27.569029      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2798/pvc-volume-tester-nxxsq\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:29.670244      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2941/test-container-pod\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:33.587791      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-4325/pod-subpath-test-inlinevolume-xbwf\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:33.669635      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-830/pod-7c84226a-3684-46a1-9a4b-9361b14c00a3\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:34.070388      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-4792/pod-ed6dcc54-4d52-402f-9a17-ca5c84504c9d\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:34.173612      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-7501/pod-projected-secrets-2069df3a-cf65-4b9c-ba3c-b8e0f2179161\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:37.303630      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-7871/pod-subpath-test-preprovisionedpv-xdkl\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:37.916252      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-5034/pvc-tester-wxpw9\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protectionl4ppg\\\" is being deleted. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:38.196810      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7893/inline-volume-wtb82\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-wtb82-my-volume\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:38.302012      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-4213/hostexec-nodes-us-central1-a-50vm-gbc7s\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:38.640126      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-5777/dns-test-7103a61d-86c9-4fa7-a72e-5fe6b5ffc372\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:39.407136      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-6233/httpd\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:39.465618      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-5034/pvc-tester-wxpw9\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protectionl4ppg\\\" is being deleted. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:40.613770      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7893/inline-volume-tester-7s9zt\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-7s9zt-my-volume-0\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:40.655864      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-7893-3419/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:40.741286      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-168/pod-3ac6dc21-3d1b-4d99-964b-3cb6953062cd\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:41.858486      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-2114/pod-subpath-test-inlinevolume-5gd8\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:42.102582      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5172/hostexec-nodes-us-central1-a-tdxw-qqrd4\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:42.492430      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-4213/pod-24a92b53-8c02-43cf-b0d8-50c9c004f084\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:42.612261      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pod-network-test-7591/netserver-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:42.639942      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pod-network-test-7591/netserver-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:42.665150      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pod-network-test-7591/netserver-2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:42.668508      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pod-network-test-7591/netserver-3\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:44.520952      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-4213/hostexec-nodes-us-central1-a-50vm-x5278\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:44.614562      10 schedule_one.go:263] 
\"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3511/hostexec-nodes-us-central1-a-m5w1-859x2\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:46.488985      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-7893/inline-volume-tester-7s9zt\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:48.253694      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1635-2602/csi-mockplugin-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:48.779770      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumemode-168/hostexec-nodes-us-central1-a-tdxw-fg67p\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:49.133718      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1975/hostexec-nodes-us-central1-a-50vm-9lmcq\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:50.357947      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-656/ss-2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:50.512445      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-1637/test-container-pod\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:50.813954      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6503-296/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:50.874542      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3511/pod-55f6cc0c-6559-4cc1-98e1-e70d6fee099e\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:52.029234      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"projected-865/projected-volume-a28074e3-012c-463d-8611-9c82ec25ec00\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:53.503687      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-3081/server-envvars-1d9b7e63-62ee-4742-83a5-86ba308486a2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:54.210349      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"configmap-4470/pod-configmaps-fdbcd3d3-320d-4e57-a542-5b7403f3e27f\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:54.600922      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"security-context-test-3593/implicit-root-uid\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:54.697282      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-5777/dns-test-774cba79-daed-4b92-ae4d-44685adb0e1e\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:54.818741      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-5fd59c7f94-g4vlg\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=3\nI0623 07:18:55.296005      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6386-4037/csi-mockplugin-0\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:55.296431      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6386-4037/csi-mockplugin-attacher-0\" 
node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:55.338100      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1635/pvc-volume-tester-969xf\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:56.377473      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5172/pod-5f256169-8866-476f-9409-4573a2696246\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:18:56.848237      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-6503/pod-subpath-test-dynamicpv-ts5r\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:57.345125      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-1229/sample-webhook-deployment-5f8b6c9658-p78r5\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:57.377278      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-1975/pod-subpath-test-preprovisionedpv-xg94\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:18:57.483904      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"downward-api-4777/downward-api-ea705ed9-68d3-4297-94f8-617e5ba6fead\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:18:57.893429      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2392/inline-volume-7ft76\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-7ft76-my-volume\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:18:59.563907      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pods-3081/client-envvars-21556b4a-62d5-42b2-b966-a51c0fe69c72\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:00.199437      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"cronjob-5091/failed-jobs-history-limit-27599479-g6k8m\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:00.335046      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2392/inline-volume-tester-vfdvr\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-vfdvr-my-volume-0\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:19:00.366582      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-2392-4001/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:00.423383      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8131/hostexec-nodes-us-central1-a-50vm-ppbsg\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:00.669141      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"webhook-9130/sample-webhook-deployment-5f8b6c9658-q5kt9\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:01.484687      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2392/inline-volume-tester-vfdvr\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:19:01.710839      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-5fd59c7f94-jbhq7\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=2\nI0623 07:19:02.612420      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5172/pod-76c7811a-adde-45ec-a3a8-f98840dd5d93\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:03.610050      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-3257/hostexec-nodes-us-central1-a-50vm-48q54\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:04.987430      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2392/inline-volume-tester-vfdvr\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:19:05.891917      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-5fd59c7f94-5z79n\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:06.860012      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-df5575f89-xmdfk\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=3\nI0623 07:19:08.648290      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-df5575f89-9zcvf\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=2\nI0623 07:19:09.051403      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-4727/test-pod\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:09.501929      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-2392/inline-volume-tester-vfdvr\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:10.556663      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8131/exec-volume-test-preprovisionedpv-qr27\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:10.833210      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6386/pvc-volume-tester-xkkvr\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:11.071289      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-3669/hostexec-nodes-us-central1-a-nk1s-jxq9w\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:11.102904      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-4727/test-host-network-pod\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:11.199156      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5605-2274/csi-mockplugin-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:11.226386      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5605-2274/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:11.336219      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5605-2274/csi-mockplugin-resizer-0\" 
node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:11.854565      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"dns-5777/dns-test-5aded711-c216-437a-8f05-cdf4f5b80896\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:11.864458      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-3257/local-injector\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:13.946035      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2769/netserver-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:13.996952      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2769/netserver-1\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:14.032450      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2769/netserver-3\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:14.034295      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"nettest-2769/netserver-2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:14.671402      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-df5575f89-xl5jc\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:14.807702      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"pod-network-test-7591/test-container-pod\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:16.921064      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-6cd9d94c6d-bj2hl\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=3\nI0623 07:19:18.056894      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-containers-test-4023/ephemeral-containers-target-pod\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:18.359573      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volumelimits-6980-1264/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:19.233642      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8667/hostexec-nodes-us-central1-a-nk1s-m2mwq\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:22.748992      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5605/pvc-volume-tester-lf7dc\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:23.318463      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-415-7506/csi-mockplugin-0\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:23.318820      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-415-7506/csi-mockplugin-attacher-0\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:24.758303      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"emptydir-7149/pod-6a1fece5-094f-40d4-8f7d-4012e60a19b4\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:25.342014      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-6cd9d94c6d-5ngbc\" node=\"nodes-us-central1-a-m5w1\" 
evaluatedNodes=5 feasibleNodes=2\nI0623 07:19:26.553894      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"sctp-7105/hostexec-nodes-us-central1-a-nk1s-7f9zq\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:26.656674      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-6339/condition-test-749c6\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:26.704125      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replicaset-6339/condition-test-nlmc5\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:26.927357      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"deployment-306/test-rolling-update-with-lb-6cd9d94c6d-7wtnl\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=1\nI0623 07:19:27.282767      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-3669/exec-volume-test-preprovisionedpv-nn7f\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:27.511072      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8667/exec-volume-test-preprovisionedpv-s5s2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:27.906762      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"replication-controller-6800/my-hostname-private-b04e17b2-1cb0-487a-913a-0bddbe20b172-bkrhh\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:28.189747      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8809/externalname-service-w4jqf\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:28.190762      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8809/externalname-service-4d5sz\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:28.861111      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-6380/hostexec-nodes-us-central1-a-tdxw-68g8f\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:30.610101      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8240/hostpathsymlink-injector\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:32.487306      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-3257/local-client\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:34.212525      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-8809/execpod5kfmf\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:34.669670      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-9018/nodeport-update-service-b78d8\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:34.688114      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"services-9018/nodeport-update-service-gnmhg\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:19:34.848130      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-415/pvc-volume-tester-vlbhv\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:19:35.631853      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"volume-8181/hostexec-nodes-us-central1-a-50vm-qptjh\" node=\"nodes-us-central1-a-50vm\" 
evaluatedNodes=1 feasibleNodes=1
... skipping 40 lines ...
I0623 07:20:00.969977      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-5339/inline-volume-sc874" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-sc874-my-volume\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:20:02.467744      10 schedule_one.go:263] "Successfully bound pod to node" pod="port-forwarding-9293/pfpod" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:20:02.939934      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-596-8658/csi-mockplugin-0" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:20:02.946471      10 schedule_one.go:263] "Successfully bound pod to node" pod="csi-mock-volumes-596-8658/csi-mockplugin-attacher-0" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:20:03.433446      10 schedule_one.go:263] "Successfully bound pod to node" pod="projected-616/downwardapi-volume-00a85d8e-16c1-472d-b81d-66f9be65686b" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:20:03.617112      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-5339/inline-volume-tester-ln5r7" err="0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \"inline-volume-tester-ln5r7-my-volume-0\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:20:03.642318      10 schedule_one.go:263] "Successfully bound pod to node" pod="ephemeral-5339-6139/csi-hostpathplugin-0" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:20:03.711469      10 schedule_one.go:263] "Successfully bound pod to node" pod="provisioning-4349/hostexec-nodes-us-central1-a-tdxw-4k79c" node="nodes-us-central1-a-tdxw" evaluatedNodes=1 feasibleNodes=1
I0623 07:20:05.542969      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-5339/inline-volume-tester-ln5r7" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:20:06.255371      10 schedule_one.go:263] "Successfully bound pod to node" pod="statefulset-656/ss-1" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:20:07.544662      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="ephemeral-5339/inline-volume-tester-ln5r7" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
... skipping 99 lines ...
I0623 07:21:02.721317      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-2939/pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 4 node(s) didn't match Pod's node affinity/selector, 4 node(s) had volume node affinity conflict. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:21:03.128884      10 schedule_one.go:263] "Successfully bound pod to node" pod="emptydir-8558/pod-683c5298-dd8e-4324-ab9b-9b901e9b5d27" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:21:04.610239      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-2939/pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc" err="0/5 nodes are available: 5 persistentvolumeclaim \"pvc-krcj7\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
E0623 07:21:04.618124      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc.16fb2ee89e347016", GenerateName:"", Namespace:"persistent-local-volumes-test-2939", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"persistent-local-volumes-test-2939", Name:"pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc", UID:"810fd735-0fc9-4b3b-9c80-226bb6e8ad1b", APIVersion:"v1", ResourceVersion:"18726", FieldPath:""}, Reason:"FailedScheduling", Message:"0/5 nodes are available: 5 persistentvolumeclaim \"pvc-krcj7\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 21, 4, 610316310, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 21, 4, 610316310, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc.16fb2ee89e347016" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2939 because it is being terminated' (will not retry!)
I0623 07:21:04.655241      10 schedule_one.go:263] "Successfully bound pod to node" pod="init-container-6197/pod-init-c047ba51-75e4-4b17-a5bd-a5dced54a3bd" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:21:05.243113      10 schedule_one.go:263] "Successfully bound pod to node" pod="services-9773/verify-service-up-host-exec-pod" node="nodes-us-central1-a-m5w1" evaluatedNodes=5 feasibleNodes=4
I0623 07:21:06.611777      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-2939/pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc" err="0/5 nodes are available: 5 persistentvolumeclaim \"pvc-krcj7\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
E0623 07:21:06.631930      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc.16fb2ee89e347016", GenerateName:"", Namespace:"persistent-local-volumes-test-2939", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"persistent-local-volumes-test-2939", Name:"pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc", UID:"810fd735-0fc9-4b3b-9c80-226bb6e8ad1b", APIVersion:"v1", ResourceVersion:"18823", FieldPath:""}, Reason:"FailedScheduling", Message:"0/5 nodes are available: 5 persistentvolumeclaim \"pvc-krcj7\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 21, 4, 610316310, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 21, 6, 614963379, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-e78e6ea3-c745-402d-b386-fee32cf1a8fc.16fb2ee89e347016" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2939 because it is being terminated' (will not retry!)
... skipping 37 lines ...
I0623 07:21:28.754134      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-7043/pod-b8474ff8-c11e-4887-a774-47718eb57e65" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 4 node(s) didn't match Pod's node affinity/selector, 4 node(s) had volume node affinity conflict. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
I0623 07:21:28.871645      10 schedule_one.go:263] "Successfully bound pod to node" pod="volumemode-4286/hostexec-nodes-us-central1-a-m5w1-knbr5" node="nodes-us-central1-a-m5w1" evaluatedNodes=1 feasibleNodes=1
I0623 07:21:29.317566      10 schedule_one.go:263] "Successfully bound pod to node" pod="volume-expand-1187-5209/csi-hostpathplugin-0" node="nodes-us-central1-a-50vm" evaluatedNodes=1 feasibleNodes=1
I0623 07:21:29.451750      10 schedule_one.go:263] "Successfully bound pod to node" pod="secrets-743/pod-secrets-44120ea8-36c9-423d-a6d7-7773c59f4da4" node="nodes-us-central1-a-tdxw" evaluatedNodes=5 feasibleNodes=4
I0623 07:21:29.793551      10 schedule_one.go:263] "Successfully bound pod to node" pod="kubectl-3334/success" node="nodes-us-central1-a-nk1s" evaluatedNodes=5 feasibleNodes=4
I0623 07:21:30.633262      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-7043/pod-b8474ff8-c11e-4887-a774-47718eb57e65" err="0/5 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 4 node(s) didn't match Pod's node affinity/selector, 4 node(s) had volume node affinity conflict. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
... skipping 11 lines ...
I0623 07:21:32.634382      10 scheduler.go:360] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-7043/pod-b8474ff8-c11e-4887-a774-47718eb57e65" err="0/5 nodes are available: 5 persistentvolumeclaim \"pvc-w6nb9\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling."
E0623 07:21:32.645499      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-b8474ff8-c11e-4887-a774-47718eb57e65.16fb2eef2497a415", GenerateName:"", Namespace:"persistent-local-volumes-test-7043", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"persistent-local-volumes-test-7043", Name:"pod-b8474ff8-c11e-4887-a774-47718eb57e65", UID:"30398a30-fae2-4639-a8ea-767f585f1562", APIVersion:"v1", ResourceVersion:"19854", FieldPath:""}, Reason:"FailedScheduling", Message:"0/5 nodes are available: 5 persistentvolumeclaim \"pvc-w6nb9\" not found. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 21, 32, 634768405, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 21, 32, 634768405, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-b8474ff8-c11e-4887-a774-47718eb57e65.16fb2eef2497a415" is forbidden: unable to create new content in namespace persistent-local-volumes-test-7043 because it is being terminated' (will not retry!)
... skipping 49 lines ...
E0623 07:21:39.235759      10 framework.go:1046] "Failed running Bind plugin" err="pods \"rs-gnzfd\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated" plugin="DefaultBinder" pod="disruption-7608/rs-gnzfd"
I0623 07:21:39.235929      10 schedule_one.go:794] "Failed to bind pod" pod="disruption-7608/rs-gnzfd"
E0623 07:21:39.237129      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"rs-gnzfd\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated" pod="disruption-7608/rs-gnzfd"
E0623 07:21:39.251137      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-gnzfd.16fb2ef0ae242f88", GenerateName:"", Namespace:"disruption-7608", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-7608", Name:"rs-gnzfd", UID:"9e145134-3fb5-4f16-a9c7-86f8c20ea21e", APIVersion:"v1", ResourceVersion:"20523", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"rs-gnzfd\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 21, 39, 237425032, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 21, 39, 237425032, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-gnzfd.16fb2ef0ae242f88" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated' (will not retry!)
... skipping 28 lines ...
E0623 07:21:40.675519      10 framework.go:1046] "Failed running Bind plugin" err="pods \"rs-gnzfd\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated" plugin="DefaultBinder" pod="disruption-7608/rs-gnzfd"
I0623 07:21:40.675682      10 schedule_one.go:794] "Failed to bind pod" pod="disruption-7608/rs-gnzfd"
E0623 07:21:40.675845      10 scheduler.go:376] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": pods \"rs-gnzfd\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated" pod="disruption-7608/rs-gnzfd"
E0623 07:21:40.700433      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-gnzfd.16fb2ef0ae242f88", GenerateName:"", Namespace:"disruption-7608", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-7608", Name:"rs-gnzfd", UID:"9e145134-3fb5-4f16-a9c7-86f8c20ea21e", APIVersion:"v1", ResourceVersion:"20529", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": pods \"rs-gnzfd\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 21, 39, 237425032, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 21, 40, 675950771, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-gnzfd.16fb2ef0ae242f88"
is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated' (will not retry!)\nI0623 07:21:40.703846      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-47n99\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:40.748640      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-lfrhf\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:40.799074      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-6ksh7\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:40.851184      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-lzf5m\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:40.902624      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-zp6rv\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:40.948181      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-dhjjg\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:40.997531      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-pfmsf\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.045756      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-6hsfk\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.097288      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-5nvkc\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.145549      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-b6q28\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.228599      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-r76cf\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.290021      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-wkq9q\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.341360      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-w79tc\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.369147      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-9k5xw\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.403581      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-tcsvd\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.453371      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-2phxf\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.498976      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-klwck\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.550744      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-vxklx\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 
07:21:41.595589      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-825f9\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.654239      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-t2tfs\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.700136      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-rwv89\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.777045      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-s8jmd\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.829156      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-t6xr4\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.856298      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-7ht9b\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.908890      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-m2hwr\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.947954      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-bpcpm\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:41.997316      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-fxndr\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:42.050791      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-bvmc9\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:42.097513      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-s6648\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:42.150545      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-dt977\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:42.196700      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-wzrcx\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:42.245977      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"gc-3120/simpletest.rc-n2wfr\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:42.441122      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3732/pod-subpath-test-preprovisionedpv-jxc7\" node=\"nodes-us-central1-a-50vm\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:21:42.608376      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-3338/ss-2\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nE0623 07:21:43.658113      10 framework.go:1046] \"Failed running Bind plugin\" err=\"pods \\\"rs-gnzfd\\\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated\" plugin=\"DefaultBinder\" pod=\"disruption-7608/rs-gnzfd\"\nI0623 07:21:43.658158      10 schedule_one.go:794] \"Failed to bind pod\" pod=\"disruption-7608/rs-gnzfd\"\nE0623 07:21:43.658841      10 scheduler.go:376] \"Error scheduling pod; retrying\" err=\"binding rejected: running 
Bind plugin \\\"DefaultBinder\\\": pods \\\"rs-gnzfd\\\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated\" pod=\"disruption-7608/rs-gnzfd\"\nE0623 07:21:43.688504      10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-gnzfd.16fb2ef0ae242f88\", GenerateName:\"\", Namespace:\"disruption-7608\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-7608\", Name:\"rs-gnzfd\", UID:\"9e145134-3fb5-4f16-a9c7-86f8c20ea21e\", APIVersion:\"v1\", ResourceVersion:\"20529\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"binding rejected: running Bind plugin \\\"DefaultBinder\\\": pods \\\"rs-gnzfd\\\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 7, 21, 39, 237425032, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 7, 21, 43, 659912720, time.Local), Count:3, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-gnzfd.16fb2ef0ae242f88\" is forbidden: unable to create new content in namespace disruption-7608 because it is being terminated' (will not retry!)\nI0623 07:21:44.049520      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"statefulset-4519/ss2-2\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:47.999630      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"provisioning-3994/pod-subpath-test-dynamicpv-mp5t\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:21:48.801632      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-5857/httpd\" node=\"nodes-us-central1-a-nk1s\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:49.385161      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8367/inline-volume-99zvk\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-99zvk-my-volume\\\". preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:21:51.761874      10 scheduler.go:360] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8367/inline-volume-tester-tqbb4\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-tqbb4-my-volume-0\\\". 
preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.\"\nI0623 07:21:51.792120      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"ephemeral-8367-5954/csi-hostpathplugin-0\" node=\"nodes-us-central1-a-tdxw\" evaluatedNodes=1 feasibleNodes=1\nI0623 07:21:51.934446      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"kubectl-7853/httpd-deployment-79bc68c759-6cngg\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:52.319015      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-4597/deployment-shared-map-item-removal-6c468f5898-b696n\" node=\"nodes-us-central1-a-m5w1\" evaluatedNodes=5 feasibleNodes=4\nI0623 07:21:52.342463      10 schedule_one.go:263] \"Successfully bound pod to node\" pod=\"apply-4597/deployment-shared-map-item-removal-6c468f5898-2qmpg\" node=\"nodes-us-ce