Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-06 23:39
Elapsed: 32m52s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 159 lines ...
.done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I1006 23:40:09.864220    4736 up.go:43] Cleaning up any leaked resources from previous cluster
I1006 23:40:09.864264    4736 dumplogs.go:40] /logs/artifacts/8409e2f7-26fe-11ec-8be6-fe96b6157dda/kops toolbox dump --name e2e-4e8fce5b36-0a91d.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh2114889162/key --ssh-user prow
I1006 23:40:09.883265    4772 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1006 23:40:09.883367    4772 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-4e8fce5b36-0a91d.k8s.local" not found
W1006 23:40:10.128237    4736 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1006 23:40:10.128293    4736 down.go:48] /logs/artifacts/8409e2f7-26fe-11ec-8be6-fe96b6157dda/kops delete cluster --name e2e-4e8fce5b36-0a91d.k8s.local --yes
I1006 23:40:10.152544    4783 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1006 23:40:10.152655    4783 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-4e8fce5b36-0a91d.k8s.local" not found
I1006 23:40:10.375260    4736 gcs.go:51] gsutil ls -b -p k8s-boskos-gce-project-06 gs://k8s-boskos-gce-project-06-state-84
I1006 23:40:11.979239    4736 gcs.go:70] gsutil mb -p k8s-boskos-gce-project-06 gs://k8s-boskos-gce-project-06-state-84
Creating gs://k8s-boskos-gce-project-06-state-84/...
I1006 23:40:13.819062    4736 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/06 23:40:13 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1006 23:40:13.828282    4736 http.go:37] curl https://ip.jsb.workers.dev
I1006 23:40:13.949731    4736 up.go:144] /logs/artifacts/8409e2f7-26fe-11ec-8be6-fe96b6157dda/kops create cluster --name e2e-4e8fce5b36-0a91d.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 --ssh-public-key /tmp/kops-ssh2114889162/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --channel=alpha --networking=kubenet --container-runtime=docker --admin-access 35.232.169.255/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west3-a --master-size e2-standard-2 --project k8s-boskos-gce-project-06 --vpc e2e-4e8fce5b36-0a91d
I1006 23:40:13.968898    5071 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1006 23:40:13.969004    5071 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1006 23:40:13.993246    5071 create_cluster.go:838] Using SSH public key: /tmp/kops-ssh2114889162/key.pub
W1006 23:40:14.290615    5071 new_cluster.go:355] VMs will be configured to use the GCE default compute Service Account! This is an anti-pattern
... skipping 20 lines ...
W1006 23:40:18.851826    5071 vfs_castore.go:377] CA private key was not found
I1006 23:40:18.940350    5071 keypair.go:213] Issuing new certificate: "service-account"
I1006 23:40:18.947074    5071 keypair.go:213] Issuing new certificate: "kubernetes-ca"
I1006 23:40:28.038164    5071 executor.go:111] Tasks: 39 done / 65 total; 19 can run
I1006 23:40:28.249138    5071 keypair.go:213] Issuing new certificate: "kubelet"
I1006 23:40:28.249976    5071 keypair.go:213] Issuing new certificate: "kube-proxy"
W1006 23:40:29.398308    5071 executor.go:139] error running task "FirewallRule/node-to-master-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398356    5071 executor.go:139] error running task "FirewallRule/node-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398367    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-node-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398376    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-master-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398383    5071 executor.go:139] error running task "FirewallRule/https-api-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398391    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398397    5071 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398404    5071 executor.go:139] error running task "FirewallRule/master-to-master-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398410    5071 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398418    5071 executor.go:139] error running task "FirewallRule/https-api-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398425    5071 executor.go:139] error running task "FirewallRule/master-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398431    5071 executor.go:139] error running task "FirewallRule/pod-cidrs-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:40:29.398437    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-master-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
I1006 23:40:29.398464    5071 executor.go:111] Tasks: 45 done / 65 total; 16 can run
W1006 23:42:07.112302    5071 executor.go:139] error running task "FirewallRule/master-to-master-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112353    5071 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112367    5071 executor.go:139] error running task "FirewallRule/https-api-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112375    5071 executor.go:139] error running task "FirewallRule/master-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112381    5071 executor.go:139] error running task "FirewallRule/pod-cidrs-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112389    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-master-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112396    5071 executor.go:139] error running task "FirewallRule/node-to-master-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112401    5071 executor.go:139] error running task "FirewallRule/node-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112407    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-node-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112417    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-master-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112423    5071 executor.go:139] error running task "FirewallRule/https-api-ipv6-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
W1006 23:42:07.112429    5071 executor.go:139] error running task "FirewallRule/ssh-external-to-node-e2e-4e8fce5b36-0a91d-k8s-local" (8m20s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-06/global/networks/e2e-4e8fce5b36-0a91d' is not ready, resourceNotReady
I1006 23:42:07.112451    5071 executor.go:111] Tasks: 49 done / 65 total; 15 can run
I1006 23:42:24.492736    5071 executor.go:111] Tasks: 64 done / 65 total; 1 can run
I1006 23:42:37.543931    5071 executor.go:111] Tasks: 65 done / 65 total; 0 can run
I1006 23:42:37.598784    5071 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-4e8fce5b36-0a91d.k8s.local

... skipping 8 lines ...

I1006 23:42:38.080406    4736 up.go:181] /logs/artifacts/8409e2f7-26fe-11ec-8be6-fe96b6157dda/kops validate cluster --name e2e-4e8fce5b36-0a91d.k8s.local --count 10 --wait 15m0s
I1006 23:42:38.103398    5089 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1006 23:42:38.103525    5089 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-4e8fce5b36-0a91d.k8s.local

W1006 23:43:08.397849    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: i/o timeout
W1006 23:43:18.421487    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:43:28.445382    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:43:38.468423    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:43:48.492645    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:43:58.516004    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:44:08.540485    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:44:18.566388    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:44:28.590898    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:44:38.615966    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:44:48.638857    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": dial tcp 34.106.187.92:443: connect: connection refused
W1006 23:45:08.664993    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": net/http: TLS handshake timeout
W1006 23:45:28.689881    5089 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.187.92/api/v1/nodes": net/http: TLS handshake timeout
I1006 23:45:39.117026    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3

... skipping 6 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f" has not yet joined cluster
Node	master-us-west3-a-8lvv														node "master-us-west3-a-8lvv" of role "master" is not ready

Validation Failed
W1006 23:45:39.873433    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:45:50.208835    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 7 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f" has not yet joined cluster
Node	master-us-west3-a-8lvv														node "master-us-west3-a-8lvv" of role "master" is not ready

Validation Failed
W1006 23:45:50.909385    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:46:01.304337    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 8 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f" has not yet joined cluster
Node	master-us-west3-a-8lvv														node "master-us-west3-a-8lvv" of role "master" is not ready
Pod	kube-system/kube-proxy-master-us-west3-a-8lvv											system-node-critical pod "kube-proxy-master-us-west3-a-8lvv" is pending

Validation Failed
W1006 23:46:02.000927    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:46:12.429347    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 14 lines ...
Pod	kube-system/dns-controller-6556d9c57b-vz9c6											system-cluster-critical pod "dns-controller-6556d9c57b-vz9c6" is pending
Pod	kube-system/etcd-manager-events-master-us-west3-a-8lvv										system-cluster-critical pod "etcd-manager-events-master-us-west3-a-8lvv" is pending
Pod	kube-system/etcd-manager-main-master-us-west3-a-8lvv										system-cluster-critical pod "etcd-manager-main-master-us-west3-a-8lvv" is pending
Pod	kube-system/kops-controller-p8wpg												system-cluster-critical pod "kops-controller-p8wpg" is pending
Pod	kube-system/kube-scheduler-master-us-west3-a-8lvv										system-cluster-critical pod "kube-scheduler-master-us-west3-a-8lvv" is pending

Validation Failed
W1006 23:46:13.198461    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:46:23.642125    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 9 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f" has not yet joined cluster
Node	master-us-west3-a-8lvv														master "master-us-west3-a-8lvv" is missing kube-apiserver pod
Pod	kube-system/coredns-5dc785954d-s9n4k												system-cluster-critical pod "coredns-5dc785954d-s9n4k" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ngdl8											system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ngdl8" is pending

Validation Failed
W1006 23:46:24.324891    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:46:34.649987    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 9 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f" has not yet joined cluster
Node	master-us-west3-a-8lvv														master "master-us-west3-a-8lvv" is missing kube-apiserver pod
Pod	kube-system/coredns-5dc785954d-s9n4k												system-cluster-critical pod "coredns-5dc785954d-s9n4k" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ngdl8											system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ngdl8" is pending

Validation Failed
W1006 23:46:35.354953    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:46:45.729597    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 8 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f	machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-s9n4k												system-cluster-critical pod "coredns-5dc785954d-s9n4k" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ngdl8											system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ngdl8" is pending

Validation Failed
W1006 23:46:46.469338    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:46:56.870631    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 16 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ngdl8	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ngdl8" is pending
Pod	kube-system/metadata-proxy-v0.12-45zz6		system-node-critical pod "metadata-proxy-v0.12-45zz6" is pending
Pod	kube-system/metadata-proxy-v0.12-6jpdj		system-node-critical pod "metadata-proxy-v0.12-6jpdj" is pending
Pod	kube-system/metadata-proxy-v0.12-nl84f		system-node-critical pod "metadata-proxy-v0.12-nl84f" is pending
Pod	kube-system/metadata-proxy-v0.12-snrf6		system-node-critical pod "metadata-proxy-v0.12-snrf6" is pending

Validation Failed
W1006 23:46:57.669701    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:47:08.054174    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 15 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ngdl8	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ngdl8" is pending
Pod	kube-system/metadata-proxy-v0.12-45zz6		system-node-critical pod "metadata-proxy-v0.12-45zz6" is pending
Pod	kube-system/metadata-proxy-v0.12-6jpdj		system-node-critical pod "metadata-proxy-v0.12-6jpdj" is pending
Pod	kube-system/metadata-proxy-v0.12-nl84f		system-node-critical pod "metadata-proxy-v0.12-nl84f" is pending
Pod	kube-system/metadata-proxy-v0.12-snrf6		system-node-critical pod "metadata-proxy-v0.12-snrf6" is pending

Validation Failed
W1006 23:47:08.789901    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:47:19.134759    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 10 lines ...
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-s9n4k	system-cluster-critical pod "coredns-5dc785954d-s9n4k" is not ready (coredns)
Pod	kube-system/metadata-proxy-v0.12-6jpdj	system-node-critical pod "metadata-proxy-v0.12-6jpdj" is pending
Pod	kube-system/metadata-proxy-v0.12-nl84f	system-node-critical pod "metadata-proxy-v0.12-nl84f" is pending
Pod	kube-system/metadata-proxy-v0.12-snrf6	system-node-critical pod "metadata-proxy-v0.12-snrf6" is pending

Validation Failed
W1006 23:47:19.787137    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:47:30.025549    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 9 lines ...
VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-s9n4k	system-cluster-critical pod "coredns-5dc785954d-s9n4k" is not ready (coredns)
Pod	kube-system/metadata-proxy-v0.12-6jpdj	system-node-critical pod "metadata-proxy-v0.12-6jpdj" is pending
Pod	kube-system/metadata-proxy-v0.12-snrf6	system-node-critical pod "metadata-proxy-v0.12-snrf6" is pending

Validation Failed
W1006 23:47:30.772422    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:47:41.104076    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 8 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-s9n4k	system-cluster-critical pod "coredns-5dc785954d-s9n4k" is not ready (coredns)
Pod	kube-system/metadata-proxy-v0.12-snrf6	system-node-critical pod "metadata-proxy-v0.12-snrf6" is pending

Validation Failed
W1006 23:47:41.804059    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:47:52.133601    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 71 lines ...
nodes-us-west3-a-xm8f	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-west3-a-v32d	system-node-critical pod "kube-proxy-nodes-us-west3-a-v32d" is pending

Validation Failed
W1006 23:48:37.036409    5089 validate_cluster.go:232] (will retry): cluster not yet healthy
I1006 23:49:00.837682    5089 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 180 lines ...
===================================
Random Seed: 1633564258 - Will randomize all specs
Will run 6432 specs

Running in parallel across 25 nodes

Oct  6 23:51:14.547: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Oct  6 23:51:14.547: INFO:  > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Oct  6 23:51:14.547: INFO:  > 
Oct  6 23:51:14.547: INFO: Cluster image sources lookup failed: exit status 1

Oct  6 23:51:14.547: INFO: >>> kubeConfig: /root/.kube/config
Oct  6 23:51:14.550: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct  6 23:51:14.651: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct  6 23:51:14.755: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct  6 23:51:14.756: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 698 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 404 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:24.119: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.701 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:51:15.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c" in namespace "projected-2058" to be "Succeeded or Failed"
Oct  6 23:51:15.419: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 54.193347ms
Oct  6 23:51:17.449: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084101031s
Oct  6 23:51:19.476: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110423147s
Oct  6 23:51:21.501: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135445994s
Oct  6 23:51:23.525: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159978252s
Oct  6 23:51:25.551: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185416444s
STEP: Saw pod success
Oct  6 23:51:25.551: INFO: Pod "downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c" satisfied condition "Succeeded or Failed"
Oct  6 23:51:25.574: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c container client-container: <nil>
STEP: delete the pod
Oct  6 23:51:25.648: INFO: Waiting for pod downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c to disappear
Oct  6 23:51:25.671: INFO: Pod downwardapi-volume-1e765597-ecc4-43b8-9052-6123244c8d5c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 20 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Oct  6 23:51:15.210: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:51:15.266: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-5rsc
STEP: Creating a pod to test subpath
Oct  6 23:51:15.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-5rsc" in namespace "provisioning-8702" to be "Succeeded or Failed"
Oct  6 23:51:15.383: INFO: Pod "pod-subpath-test-inlinevolume-5rsc": Phase="Pending", Reason="", readiness=false. Elapsed: 53.143931ms
Oct  6 23:51:17.414: INFO: Pod "pod-subpath-test-inlinevolume-5rsc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084044272s
Oct  6 23:51:19.440: INFO: Pod "pod-subpath-test-inlinevolume-5rsc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110073976s
Oct  6 23:51:21.466: INFO: Pod "pod-subpath-test-inlinevolume-5rsc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136359057s
Oct  6 23:51:23.493: INFO: Pod "pod-subpath-test-inlinevolume-5rsc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162734582s
Oct  6 23:51:25.520: INFO: Pod "pod-subpath-test-inlinevolume-5rsc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189710629s
STEP: Saw pod success
Oct  6 23:51:25.520: INFO: Pod "pod-subpath-test-inlinevolume-5rsc" satisfied condition "Succeeded or Failed"
Oct  6 23:51:25.546: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-inlinevolume-5rsc container test-container-subpath-inlinevolume-5rsc: <nil>
STEP: delete the pod
Oct  6 23:51:25.643: INFO: Waiting for pod pod-subpath-test-inlinevolume-5rsc to disappear
Oct  6 23:51:25.667: INFO: Pod pod-subpath-test-inlinevolume-5rsc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-5rsc
Oct  6 23:51:25.667: INFO: Deleting pod "pod-subpath-test-inlinevolume-5rsc" in namespace "provisioning-8702"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:25.833: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 138 lines ...
Oct  6 23:51:20.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 23:51:22.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 23:51:24.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161076, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 23:51:27.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:51:28.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5953" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:13.245 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:28.333: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:25.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct  6 23:51:25.907: INFO: Waiting up to 5m0s for pod "pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea" in namespace "emptydir-8706" to be "Succeeded or Failed"
Oct  6 23:51:25.930: INFO: Pod "pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea": Phase="Pending", Reason="", readiness=false. Elapsed: 23.481452ms
Oct  6 23:51:27.984: INFO: Pod "pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07701015s
Oct  6 23:51:30.009: INFO: Pod "pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102087479s
STEP: Saw pod success
Oct  6 23:51:30.009: INFO: Pod "pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea" satisfied condition "Succeeded or Failed"
Oct  6 23:51:30.034: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea container test-container: <nil>
STEP: delete the pod
Oct  6 23:51:30.339: INFO: Waiting for pod pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea to disappear
Oct  6 23:51:30.363: INFO: Pod pod-f6607ec4-753b-4fd4-8d5d-6c3a1dfcfaea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 8 lines ...
Oct  6 23:51:26.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Oct  6 23:51:26.306: INFO: Waiting up to 5m0s for pod "var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9" in namespace "var-expansion-9520" to be "Succeeded or Failed"
Oct  6 23:51:26.330: INFO: Pod "var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.781184ms
Oct  6 23:51:28.355: INFO: Pod "var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049267366s
Oct  6 23:51:30.381: INFO: Pod "var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075724239s
STEP: Saw pod success
Oct  6 23:51:30.381: INFO: Pod "var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9" satisfied condition "Succeeded or Failed"
Oct  6 23:51:30.405: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9 container dapi-container: <nil>
STEP: delete the pod
Oct  6 23:51:30.499: INFO: Waiting for pod var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9 to disappear
Oct  6 23:51:30.523: INFO: Pod var-expansion-c21c3fd2-a73d-4d68-bcbb-061f836085a9 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:51:30.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9520" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:30.606: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:31.394: INFO: Only supported for providers [aws] (not gce)
... skipping 79 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:15.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:16.585 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:51:33.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1378" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":2,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:33.229: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
W1006 23:51:16.878267    5757 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  6 23:51:16.878: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  6 23:51:16.958: INFO: Waiting up to 5m0s for pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05" in namespace "security-context-5780" to be "Succeeded or Failed"
Oct  6 23:51:16.999: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 40.812442ms
Oct  6 23:51:19.025: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066374972s
Oct  6 23:51:21.050: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09167938s
Oct  6 23:51:23.075: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116811985s
Oct  6 23:51:25.101: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142741102s
Oct  6 23:51:27.126: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168110896s
Oct  6 23:51:29.155: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 12.196185031s
Oct  6 23:51:31.198: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Pending", Reason="", readiness=false. Elapsed: 14.239305967s
Oct  6 23:51:33.229: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.270773837s
STEP: Saw pod success
Oct  6 23:51:33.229: INFO: Pod "security-context-936fda92-4605-41d8-b03d-7106cb7d0b05" satisfied condition "Succeeded or Failed"
Oct  6 23:51:33.260: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod security-context-936fda92-4605-41d8-b03d-7106cb7d0b05 container test-container: <nil>
STEP: delete the pod
Oct  6 23:51:33.341: INFO: Waiting for pod security-context-936fda92-4605-41d8-b03d-7106cb7d0b05 to disappear
Oct  6 23:51:33.380: INFO: Pod security-context-936fda92-4605-41d8-b03d-7106cb7d0b05 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.445 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:33.555: INFO: Only supported for providers [aws] (not gce)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 92 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:20.754 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1198
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:22.774 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":1,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:37.925: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Oct  6 23:51:37.451: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  6 23:51:37.451: INFO: stdout: "etcd-0 scheduler controller-manager etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of etcd-0
Oct  6 23:51:37.451: INFO: Running '/tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=kubectl-4125 get componentstatuses etcd-0'
Oct  6 23:51:37.610: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  6 23:51:37.610: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-0   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
STEP: getting status of scheduler
Oct  6 23:51:37.610: INFO: Running '/tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=kubectl-4125 get componentstatuses scheduler'
Oct  6 23:51:37.756: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  6 23:51:37.756: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Oct  6 23:51:37.756: INFO: Running '/tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=kubectl-4125 get componentstatuses controller-manager'
Oct  6 23:51:37.907: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  6 23:51:37.908: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Oct  6 23:51:37.908: INFO: Running '/tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=kubectl-4125 get componentstatuses etcd-1'
Oct  6 23:51:38.061: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  6 23:51:38.061: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-1   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:51:38.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4125" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":3,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:38.146: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:51:34.002: INFO: Waiting up to 5m0s for pod "metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f" in namespace "projected-9747" to be "Succeeded or Failed"
Oct  6 23:51:34.174: INFO: Pod "metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f": Phase="Pending", Reason="", readiness=false. Elapsed: 171.980579ms
Oct  6 23:51:36.205: INFO: Pod "metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202674704s
Oct  6 23:51:38.231: INFO: Pod "metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.229006469s
STEP: Saw pod success
Oct  6 23:51:38.231: INFO: Pod "metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f" satisfied condition "Succeeded or Failed"
Oct  6 23:51:38.268: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f container client-container: <nil>
STEP: delete the pod
Oct  6 23:51:38.351: INFO: Waiting for pod metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f to disappear
Oct  6 23:51:38.380: INFO: Pod metadata-volume-8ba40e8f-0ceb-4b99-a636-0b7ed464419f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:51:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9747" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:38.469: INFO: Only supported for providers [aws] (not gce)
... skipping 86 lines ...
Oct  6 23:51:35.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Oct  6 23:51:35.933: INFO: Waiting up to 5m0s for pod "var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5" in namespace "var-expansion-4723" to be "Succeeded or Failed"
Oct  6 23:51:35.957: INFO: Pod "var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.510629ms
Oct  6 23:51:37.982: INFO: Pod "var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047928033s
Oct  6 23:51:40.006: INFO: Pod "var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072859261s
STEP: Saw pod success
Oct  6 23:51:40.007: INFO: Pod "var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5" satisfied condition "Succeeded or Failed"
Oct  6 23:51:40.029: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5 container dapi-container: <nil>
STEP: delete the pod
Oct  6 23:51:40.099: INFO: Waiting for pod var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5 to disappear
Oct  6 23:51:40.123: INFO: Pod var-expansion-93edd39e-0a8a-4fb6-92d3-4c52a7da54a5 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:51:40.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4723" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:30.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 85 lines ...
• [SLOW TEST:29.130 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":1,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:44.248: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 178 lines ...
• [SLOW TEST:30.092 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:45.326: INFO: Only supported for providers [azure] (not gce)
... skipping 60 lines ...
• [SLOW TEST:17.532 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:50.032: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Oct  6 23:51:38.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Oct  6 23:51:38.344: INFO: Waiting up to 5m0s for pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9" in namespace "var-expansion-2689" to be "Succeeded or Failed"
Oct  6 23:51:38.368: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.727892ms
Oct  6 23:51:40.392: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047386573s
Oct  6 23:51:42.429: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084322634s
Oct  6 23:51:44.493: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148231981s
Oct  6 23:51:46.516: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17203043s
Oct  6 23:51:48.541: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196420818s
Oct  6 23:51:50.566: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221221754s
Oct  6 23:51:52.590: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.246099821s
STEP: Saw pod success
Oct  6 23:51:52.590: INFO: Pod "var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9" satisfied condition "Succeeded or Failed"
Oct  6 23:51:52.614: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9 container dapi-container: <nil>
STEP: delete the pod
Oct  6 23:51:52.844: INFO: Waiting for pod var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9 to disappear
Oct  6 23:51:52.867: INFO: Pod var-expansion-38e9d5aa-0e31-4140-9bab-faaa336836e9 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.755 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:51:52.947: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-57552b35-33a3-4544-9a14-d8f508796e48
STEP: Creating a pod to test consume secrets
Oct  6 23:51:38.150: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230" in namespace "projected-6285" to be "Succeeded or Failed"
Oct  6 23:51:38.174: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 24.14399ms
Oct  6 23:51:40.200: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050300209s
Oct  6 23:51:42.231: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080542556s
Oct  6 23:51:44.355: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205260657s
Oct  6 23:51:46.385: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234907704s
Oct  6 23:51:48.423: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 10.273390374s
Oct  6 23:51:50.448: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 12.298100879s
Oct  6 23:51:52.477: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Pending", Reason="", readiness=false. Elapsed: 14.327134693s
Oct  6 23:51:54.504: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.353894267s
STEP: Saw pod success
Oct  6 23:51:54.504: INFO: Pod "pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230" satisfied condition "Succeeded or Failed"
Oct  6 23:51:54.530: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230 container secret-volume-test: <nil>
STEP: delete the pod
Oct  6 23:51:54.659: INFO: Waiting for pod pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230 to disappear
Oct  6 23:51:54.684: INFO: Pod pod-projected-secrets-97753d6e-58e1-49e3-8d07-909392c0e230 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 36 lines ...
• [SLOW TEST:42.489 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:582
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:51:53.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0" in namespace "downward-api-7503" to be "Succeeded or Failed"
Oct  6 23:51:53.184: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.540771ms
Oct  6 23:51:55.209: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048455242s
Oct  6 23:51:57.234: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073842994s
Oct  6 23:51:59.259: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098116131s
Oct  6 23:52:01.284: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123262301s
Oct  6 23:52:03.309: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148478838s
STEP: Saw pod success
Oct  6 23:52:03.309: INFO: Pod "downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0" satisfied condition "Succeeded or Failed"
Oct  6 23:52:03.334: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0 container client-container: <nil>
STEP: delete the pod
Oct  6 23:52:03.395: INFO: Waiting for pod downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0 to disappear
Oct  6 23:52:03.419: INFO: Pod downwardapi-volume-7bbdc7c3-37cc-43a4-9f78-3727a9bba7d0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.468 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:03.508: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
• [SLOW TEST:20.706 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:06.122: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:06.766: INFO: Only supported for providers [aws] (not gce)
... skipping 60 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:38.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
• [SLOW TEST:30.674 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:09.191: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 133 lines ...
• [SLOW TEST:29.056 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":1,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:17.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename clientset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:18.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-9193" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:18.293: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:18.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4990" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:18.566: INFO: >>> kubeConfig: /root/.kube/config
... skipping 103 lines ...
Oct  6 23:51:42.303: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h428h] to have phase Bound
Oct  6 23:51:42.328: INFO: PersistentVolumeClaim pvc-h428h found but phase is Pending instead of Bound.
Oct  6 23:51:44.383: INFO: PersistentVolumeClaim pvc-h428h found and phase=Bound (2.07932719s)
Oct  6 23:51:44.383: INFO: Waiting up to 3m0s for PersistentVolume gce-jf9lh to have phase Bound
Oct  6 23:51:44.480: INFO: PersistentVolume gce-jf9lh found and phase=Bound (97.32954ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127
STEP: Deleting the Claim
Oct  6 23:52:08.852: INFO: Deleting PersistentVolumeClaim "pvc-h428h"
STEP: Deleting the Pod
Oct  6 23:52:08.982: INFO: Deleting pod "pvc-tester-rtnxv" in namespace "pv-6311"
Oct  6 23:52:09.010: INFO: Wait up to 5m0s for pod "pvc-tester-rtnxv" to be fully deleted
... skipping 14 lines ...
Oct  6 23:52:25.289: INFO: Successfully deleted PD "e2e-b978ce58-fd64-47aa-a482-a7b21000a7c4".


• [SLOW TEST:45.092 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:25.739: INFO: >>> kubeConfig: /root/.kube/config
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:25.479: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 178 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":1,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:31.075: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:31.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8331" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-3b0a4752-1fc9-406a-8602-fcf408117b9c
STEP: Creating a pod to test consume secrets
Oct  6 23:51:57.808: INFO: Waiting up to 5m0s for pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175" in namespace "secrets-2239" to be "Succeeded or Failed"
Oct  6 23:51:57.832: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 23.860195ms
Oct  6 23:51:59.855: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047049012s
Oct  6 23:52:01.880: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071805872s
Oct  6 23:52:03.903: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095725446s
Oct  6 23:52:05.930: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121818035s
Oct  6 23:52:07.958: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150590628s
... skipping 7 lines ...
Oct  6 23:52:24.159: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 26.351309857s
Oct  6 23:52:26.183: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 28.375459243s
Oct  6 23:52:28.208: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 30.400227437s
Oct  6 23:52:30.232: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Pending", Reason="", readiness=false. Elapsed: 32.424120197s
Oct  6 23:52:32.256: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.447876191s
STEP: Saw pod success
Oct  6 23:52:32.256: INFO: Pod "pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175" satisfied condition "Succeeded or Failed"
Oct  6 23:52:32.280: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175 container secret-volume-test: <nil>
STEP: delete the pod
Oct  6 23:52:32.357: INFO: Waiting for pod pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175 to disappear
Oct  6 23:52:32.381: INFO: Pod pod-secrets-dfaad5a2-79c3-4d09-bc58-a4fd20e83175 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 31 lines ...
• [SLOW TEST:27.391 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:113
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a private image","total":-1,"completed":3,"skipped":27,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:54.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Oct  6 23:51:54.904: INFO: Waiting up to 5m0s for pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0" in namespace "svcaccounts-2575" to be "Succeeded or Failed"
Oct  6 23:51:54.928: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.718633ms
Oct  6 23:51:56.964: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059923486s
Oct  6 23:51:58.989: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085437315s
Oct  6 23:52:01.016: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112750261s
Oct  6 23:52:03.042: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138555836s
Oct  6 23:52:05.069: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165371062s
... skipping 9 lines ...
Oct  6 23:52:25.352: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.448526913s
Oct  6 23:52:27.377: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.47348809s
Oct  6 23:52:29.416: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.512420753s
Oct  6 23:52:31.444: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.54041599s
Oct  6 23:52:33.470: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.56605149s
STEP: Saw pod success
Oct  6 23:52:33.470: INFO: Pod "test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0" satisfied condition "Succeeded or Failed"
Oct  6 23:52:33.516: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0 container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:52:33.578: INFO: Waiting for pod test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0 to disappear
Oct  6 23:52:33.606: INFO: Pod test-pod-fef9875e-ebdf-46a5-8ce4-15967a39efb0 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:38.911 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:33.681: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 91 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-5d0a37d1-d638-4d98-9617-c3784414c710
STEP: Creating a pod to test consume configMaps
Oct  6 23:52:00.841: INFO: Waiting up to 5m0s for pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e" in namespace "configmap-2657" to be "Succeeded or Failed"
Oct  6 23:52:00.865: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.432069ms
Oct  6 23:52:02.892: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050967071s
Oct  6 23:52:04.918: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077383232s
Oct  6 23:52:06.945: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10418181s
Oct  6 23:52:08.971: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129697577s
Oct  6 23:52:10.998: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157309004s
... skipping 7 lines ...
Oct  6 23:52:27.215: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.373974917s
Oct  6 23:52:29.241: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.399925656s
Oct  6 23:52:31.271: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.429675503s
Oct  6 23:52:33.296: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.455318833s
Oct  6 23:52:35.322: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.480906996s
STEP: Saw pod success
Oct  6 23:52:35.322: INFO: Pod "pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e" satisfied condition "Succeeded or Failed"
Oct  6 23:52:35.347: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:52:35.408: INFO: Waiting for pod pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e to disappear
Oct  6 23:52:35.432: INFO: Pod pod-configmaps-648c1946-8e86-49d8-a34e-5999c95a406e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:34.833 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:32.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:5.180 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:37.664: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 167 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:37.944: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 43 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:03.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:43.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-156" for this suite.


• [SLOW TEST:40.224 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:43.767: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":7,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:37.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Oct  6 23:52:37.830: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-9eef40bc-dff9-4ff7-b52a-f447fae81714" in namespace "security-context-test-9213" to be "Succeeded or Failed"
Oct  6 23:52:37.853: INFO: Pod "busybox-readonly-true-9eef40bc-dff9-4ff7-b52a-f447fae81714": Phase="Pending", Reason="", readiness=false. Elapsed: 23.243911ms
Oct  6 23:52:39.877: INFO: Pod "busybox-readonly-true-9eef40bc-dff9-4ff7-b52a-f447fae81714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047736813s
Oct  6 23:52:41.901: INFO: Pod "busybox-readonly-true-9eef40bc-dff9-4ff7-b52a-f447fae81714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071430978s
Oct  6 23:52:43.928: INFO: Pod "busybox-readonly-true-9eef40bc-dff9-4ff7-b52a-f447fae81714": Phase="Failed", Reason="", readiness=false. Elapsed: 6.098043143s
Oct  6 23:52:43.928: INFO: Pod "busybox-readonly-true-9eef40bc-dff9-4ff7-b52a-f447fae81714" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:43.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9213" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:35.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-bc744646-8029-4fcc-9a21-1235db871d4c
STEP: Creating a pod to test consume configMaps
Oct  6 23:52:35.691: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8" in namespace "projected-2934" to be "Succeeded or Failed"
Oct  6 23:52:35.716: INFO: Pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.49203ms
Oct  6 23:52:37.745: INFO: Pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053754285s
Oct  6 23:52:39.770: INFO: Pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07875181s
Oct  6 23:52:41.797: INFO: Pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10593001s
Oct  6 23:52:43.823: INFO: Pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131526612s
STEP: Saw pod success
Oct  6 23:52:43.823: INFO: Pod "pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8" satisfied condition "Succeeded or Failed"
Oct  6 23:52:43.850: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct  6 23:52:43.914: INFO: Waiting for pod pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8 to disappear
Oct  6 23:52:43.948: INFO: Pod pod-projected-configmaps-c5cb63f1-2c97-43dd-ad4f-c3bf08bc47d8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.494 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:44.007: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 113 lines ...
Oct  6 23:52:25.352: INFO: PersistentVolumeClaim pvc-ldmng found but phase is Pending instead of Bound.
Oct  6 23:52:27.377: INFO: PersistentVolumeClaim pvc-ldmng found and phase=Bound (6.111583874s)
Oct  6 23:52:27.378: INFO: Waiting up to 3m0s for PersistentVolume local-vxz47 to have phase Bound
Oct  6 23:52:27.402: INFO: PersistentVolume local-vxz47 found and phase=Bound (23.951321ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-68wc
STEP: Creating a pod to test subpath
Oct  6 23:52:27.483: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-68wc" in namespace "provisioning-1739" to be "Succeeded or Failed"
Oct  6 23:52:27.516: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 33.257403ms
Oct  6 23:52:29.542: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058790716s
Oct  6 23:52:31.569: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085741032s
Oct  6 23:52:33.599: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116481171s
Oct  6 23:52:35.627: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144222827s
Oct  6 23:52:37.655: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.172536631s
Oct  6 23:52:39.682: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.19946247s
Oct  6 23:52:41.720: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.237093784s
Oct  6 23:52:43.748: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.264823161s
STEP: Saw pod success
Oct  6 23:52:43.748: INFO: Pod "pod-subpath-test-preprovisionedpv-68wc" satisfied condition "Succeeded or Failed"
Oct  6 23:52:43.775: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-preprovisionedpv-68wc container test-container-volume-preprovisionedpv-68wc: <nil>
STEP: delete the pod
Oct  6 23:52:43.847: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-68wc to disappear
Oct  6 23:52:43.872: INFO: Pod pod-subpath-test-preprovisionedpv-68wc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-68wc
Oct  6 23:52:43.872: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-68wc" in namespace "provisioning-1739"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:44.472: INFO: Only supported for providers [vsphere] (not gce)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:44.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3602" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":8,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:44.791: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:45.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-8332" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":9,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:45.526: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct  6 23:52:46.511: INFO: Successfully updated pod "pod-update-activedeadlineseconds-84675fb6-d0c5-4e0e-91a9-09b94f0b7018"
Oct  6 23:52:46.511: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-84675fb6-d0c5-4e0e-91a9-09b94f0b7018" in namespace "pods-4157" to be "terminated due to deadline exceeded"
Oct  6 23:52:46.536: INFO: Pod "pod-update-activedeadlineseconds-84675fb6-d0c5-4e0e-91a9-09b94f0b7018": Phase="Running", Reason="", readiness=true. Elapsed: 24.198848ms
Oct  6 23:52:48.562: INFO: Pod "pod-update-activedeadlineseconds-84675fb6-d0c5-4e0e-91a9-09b94f0b7018": Phase="Running", Reason="", readiness=true. Elapsed: 2.050062491s
Oct  6 23:52:50.589: INFO: Pod "pod-update-activedeadlineseconds-84675fb6-d0c5-4e0e-91a9-09b94f0b7018": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.07714547s
Oct  6 23:52:50.589: INFO: Pod "pod-update-activedeadlineseconds-84675fb6-d0c5-4e0e-91a9-09b94f0b7018" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:50.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4157" for this suite.


• [SLOW TEST:14.950 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:51:42.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 104 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 62 lines ...
Oct  6 23:51:57.667: INFO: PersistentVolumeClaim csi-hostpath7pdnw found but phase is Pending instead of Bound.
Oct  6 23:51:59.700: INFO: PersistentVolumeClaim csi-hostpath7pdnw found but phase is Pending instead of Bound.
Oct  6 23:52:01.725: INFO: PersistentVolumeClaim csi-hostpath7pdnw found but phase is Pending instead of Bound.
Oct  6 23:52:03.749: INFO: PersistentVolumeClaim csi-hostpath7pdnw found and phase=Bound (12.188912337s)
STEP: Expanding non-expandable pvc
Oct  6 23:52:03.798: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  6 23:52:03.852: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:05.912: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:07.901: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:09.907: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:11.912: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:13.907: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:15.901: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:17.900: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:19.902: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:21.906: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:23.916: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:25.901: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:27.899: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:29.903: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:31.902: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:33.903: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  6 23:52:33.955: INFO: Error updating pvc csi-hostpath7pdnw: persistentvolumeclaims "csi-hostpath7pdnw" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct  6 23:52:33.955: INFO: Deleting PersistentVolumeClaim "csi-hostpath7pdnw"
Oct  6 23:52:33.981: INFO: Waiting up to 5m0s for PersistentVolume pvc-df608c41-429b-422c-ac0e-7fe5e44332a2 to get deleted
Oct  6 23:52:34.005: INFO: PersistentVolume pvc-df608c41-429b-422c-ac0e-7fe5e44332a2 found and phase=Released (24.155887ms)
Oct  6 23:52:39.030: INFO: PersistentVolume pvc-df608c41-429b-422c-ac0e-7fe5e44332a2 was removed
STEP: Deleting sc
... skipping 74 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":13,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:52.280: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:52.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6444" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:52.757: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:52:45.731: INFO: Waiting up to 5m0s for pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a" in namespace "downward-api-421" to be "Succeeded or Failed"
Oct  6 23:52:45.755: INFO: Pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.122289ms
Oct  6 23:52:47.781: INFO: Pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049443562s
Oct  6 23:52:49.807: INFO: Pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075608163s
Oct  6 23:52:51.832: INFO: Pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100639412s
Oct  6 23:52:53.857: INFO: Pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125228993s
STEP: Saw pod success
Oct  6 23:52:53.857: INFO: Pod "metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a" satisfied condition "Succeeded or Failed"
Oct  6 23:52:53.880: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a container client-container: <nil>
STEP: delete the pod
Oct  6 23:52:53.942: INFO: Waiting for pod metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a to disappear
Oct  6 23:52:53.965: INFO: Pod metadata-volume-9c192550-1b7c-4a46-af7f-0f8ec9503f0a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":10,"skipped":71,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:54.033: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:52:54.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7916" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":11,"skipped":82,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:54.308: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 111 lines ...
Oct  6 23:52:44.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  6 23:52:44.244: INFO: Waiting up to 5m0s for pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7" in namespace "emptydir-3175" to be "Succeeded or Failed"
Oct  6 23:52:44.270: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.643137ms
Oct  6 23:52:46.297: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052550407s
Oct  6 23:52:48.322: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077589119s
Oct  6 23:52:50.347: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102740646s
Oct  6 23:52:52.374: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129142264s
Oct  6 23:52:54.398: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153462275s
STEP: Saw pod success
Oct  6 23:52:54.398: INFO: Pod "pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7" satisfied condition "Succeeded or Failed"
Oct  6 23:52:54.423: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7 container test-container: <nil>
STEP: delete the pod
Oct  6 23:52:54.540: INFO: Waiting for pod pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7 to disappear
Oct  6 23:52:54.564: INFO: Pod pod-b4536c21-1ef8-4b62-8fed-533783fa9cb7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.608 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
Oct  6 23:52:15.702: INFO: PersistentVolumeClaim pvc-hw9kl found but phase is Pending instead of Bound.
Oct  6 23:52:17.728: INFO: PersistentVolumeClaim pvc-hw9kl found and phase=Bound (2.053602711s)
STEP: Deleting the previously created pod
Oct  6 23:52:35.852: INFO: Deleting pod "pvc-volume-tester-5rm9m" in namespace "csi-mock-volumes-9354"
Oct  6 23:52:35.881: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5rm9m" to be fully deleted
STEP: Checking CSI driver logs
Oct  6 23:52:43.960: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5ece2892-449d-4ebd-8785-b919ddb90013/volumes/kubernetes.io~csi/pvc-59958ce7-fe48-4a6a-aa3c-c171eb60d772/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-5rm9m
Oct  6 23:52:43.960: INFO: Deleting pod "pvc-volume-tester-5rm9m" in namespace "csi-mock-volumes-9354"
STEP: Deleting claim pvc-hw9kl
Oct  6 23:52:44.047: INFO: Waiting up to 2m0s for PersistentVolume pvc-59958ce7-fe48-4a6a-aa3c-c171eb60d772 to get deleted
Oct  6 23:52:44.077: INFO: PersistentVolume pvc-59958ce7-fe48-4a6a-aa3c-c171eb60d772 found and phase=Bound (29.247389ms)
Oct  6 23:52:46.101: INFO: PersistentVolume pvc-59958ce7-fe48-4a6a-aa3c-c171eb60d772 was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":4,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:52:59.199: INFO: Only supported for providers [azure] (not gce)
... skipping 96 lines ...
Oct  6 23:52:26.287: INFO: PersistentVolumeClaim pvc-gdfrv found but phase is Pending instead of Bound.
Oct  6 23:52:28.311: INFO: PersistentVolumeClaim pvc-gdfrv found and phase=Bound (2.046935807s)
Oct  6 23:52:28.311: INFO: Waiting up to 3m0s for PersistentVolume local-h9k8m to have phase Bound
Oct  6 23:52:28.336: INFO: PersistentVolume local-h9k8m found and phase=Bound (24.510859ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4nbw
STEP: Creating a pod to test subpath
Oct  6 23:52:28.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4nbw" in namespace "provisioning-707" to be "Succeeded or Failed"
Oct  6 23:52:28.433: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.419177ms
Oct  6 23:52:30.457: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046926819s
Oct  6 23:52:32.481: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071371668s
Oct  6 23:52:34.506: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096662483s
Oct  6 23:52:36.529: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11973348s
Oct  6 23:52:38.559: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149328453s
... skipping 5 lines ...
Oct  6 23:52:50.728: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 22.317879849s
Oct  6 23:52:52.751: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 24.341215086s
Oct  6 23:52:54.777: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 26.367074271s
Oct  6 23:52:56.800: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.390301465s
Oct  6 23:52:58.826: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.416372803s
STEP: Saw pod success
Oct  6 23:52:58.826: INFO: Pod "pod-subpath-test-preprovisionedpv-4nbw" satisfied condition "Succeeded or Failed"
Oct  6 23:52:58.851: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-preprovisionedpv-4nbw container test-container-subpath-preprovisionedpv-4nbw: <nil>
STEP: delete the pod
Oct  6 23:52:58.916: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4nbw to disappear
Oct  6 23:52:58.939: INFO: Pod pod-subpath-test-preprovisionedpv-4nbw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4nbw
Oct  6 23:52:58.939: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4nbw" in namespace "provisioning-707"
... skipping 32 lines ...
W1006 23:51:16.679574    5693 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  6 23:51:16.679: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Oct  6 23:51:16.724: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:51:16.861: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2394" in namespace "volume-2394" to be "Succeeded or Failed"
Oct  6 23:51:16.885: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 23.577229ms
Oct  6 23:51:18.913: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052248452s
Oct  6 23:51:20.937: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075547574s
Oct  6 23:51:22.961: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099840636s
Oct  6 23:51:24.995: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134154729s
Oct  6 23:51:27.023: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161731127s
Oct  6 23:51:29.047: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 12.186248952s
Oct  6 23:51:31.079: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.217584016s
STEP: Saw pod success
Oct  6 23:51:31.079: INFO: Pod "hostpath-symlink-prep-volume-2394" satisfied condition "Succeeded or Failed"
Oct  6 23:51:31.079: INFO: Deleting pod "hostpath-symlink-prep-volume-2394" in namespace "volume-2394"
Oct  6 23:51:31.114: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2394" to be fully deleted
Oct  6 23:51:31.137: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Oct  6 23:51:43.245: INFO: Running '/tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=volume-2394 exec hostpathsymlink-injector --namespace=volume-2394 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-2394' > /opt/0/index.html'
... skipping 70 lines ...
Oct  6 23:52:50.837: INFO: Pod hostpathsymlink-client still exists
Oct  6 23:52:52.814: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct  6 23:52:52.838: INFO: Pod hostpathsymlink-client still exists
Oct  6 23:52:54.813: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct  6 23:52:54.835: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Oct  6 23:52:54.880: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2394" in namespace "volume-2394" to be "Succeeded or Failed"
Oct  6 23:52:54.903: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 23.329744ms
Oct  6 23:52:56.926: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046417184s
Oct  6 23:52:58.949: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069426766s
Oct  6 23:53:00.974: INFO: Pod "hostpath-symlink-prep-volume-2394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094268903s
STEP: Saw pod success
Oct  6 23:53:00.974: INFO: Pod "hostpath-symlink-prep-volume-2394" satisfied condition "Succeeded or Failed"
Oct  6 23:53:00.974: INFO: Deleting pod "hostpath-symlink-prep-volume-2394" in namespace "volume-2394"
Oct  6 23:53:01.003: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2394" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:01.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2394" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:01.094: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  6 23:52:52.922: INFO: Waiting up to 5m0s for pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f" in namespace "emptydir-5881" to be "Succeeded or Failed"
Oct  6 23:52:52.948: INFO: Pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.31927ms
Oct  6 23:52:54.973: INFO: Pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050765295s
Oct  6 23:52:56.999: INFO: Pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077511252s
Oct  6 23:52:59.028: INFO: Pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106282748s
Oct  6 23:53:01.053: INFO: Pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131265602s
STEP: Saw pod success
Oct  6 23:53:01.053: INFO: Pod "pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f" satisfied condition "Succeeded or Failed"
Oct  6 23:53:01.077: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f container test-container: <nil>
STEP: delete the pod
Oct  6 23:53:01.150: INFO: Waiting for pod pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f to disappear
Oct  6 23:53:01.174: INFO: Pod pod-9fea7e30-953a-40e1-bc92-b8ae2e3f454f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":6,"skipped":48,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:25.302: INFO: >>> kubeConfig: /root/.kube/config
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:01.262: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
Oct  6 23:52:21.674: INFO: PersistentVolumeClaim pvc-ntztr found but phase is Pending instead of Bound.
Oct  6 23:52:23.698: INFO: PersistentVolumeClaim pvc-ntztr found and phase=Bound (2.054810863s)
STEP: Deleting the previously created pod
Oct  6 23:52:37.841: INFO: Deleting pod "pvc-volume-tester-8l77s" in namespace "csi-mock-volumes-5233"
Oct  6 23:52:37.868: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8l77s" to be fully deleted
STEP: Checking CSI driver logs
Oct  6 23:52:39.948: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c75de848-9d5f-4c4f-b83b-36e4d7c4daac/volumes/kubernetes.io~csi/pvc-95b8c019-e68b-449e-85bb-2027abc5889f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-8l77s
Oct  6 23:52:39.948: INFO: Deleting pod "pvc-volume-tester-8l77s" in namespace "csi-mock-volumes-5233"
STEP: Deleting claim pvc-ntztr
Oct  6 23:52:40.024: INFO: Waiting up to 2m0s for PersistentVolume pvc-95b8c019-e68b-449e-85bb-2027abc5889f to get deleted
Oct  6 23:52:40.049: INFO: PersistentVolume pvc-95b8c019-e68b-449e-85bb-2027abc5889f found and phase=Released (25.675558ms)
Oct  6 23:52:42.074: INFO: PersistentVolume pvc-95b8c019-e68b-449e-85bb-2027abc5889f found and phase=Released (2.050451597s)
... skipping 66 lines ...
STEP: Destroying namespace "apply-4517" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":7,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:01.758: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 14 lines ...
      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:44.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:17.375 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:01.831: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:50.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
• [SLOW TEST:11.341 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:02.010: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
• [SLOW TEST:60.256 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:07.111: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 41 lines ...
Oct  6 23:52:56.494: INFO: PersistentVolumeClaim pvc-74rqc found but phase is Pending instead of Bound.
Oct  6 23:52:58.522: INFO: PersistentVolumeClaim pvc-74rqc found and phase=Bound (2.060152027s)
Oct  6 23:52:58.522: INFO: Waiting up to 3m0s for PersistentVolume local-msvb4 to have phase Bound
Oct  6 23:52:58.545: INFO: PersistentVolume local-msvb4 found and phase=Bound (23.200798ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-26vw
STEP: Creating a pod to test exec-volume-test
Oct  6 23:52:58.623: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-26vw" in namespace "volume-1021" to be "Succeeded or Failed"
Oct  6 23:52:58.651: INFO: Pod "exec-volume-test-preprovisionedpv-26vw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.129963ms
Oct  6 23:53:00.675: INFO: Pod "exec-volume-test-preprovisionedpv-26vw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051997452s
Oct  6 23:53:02.703: INFO: Pod "exec-volume-test-preprovisionedpv-26vw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080265799s
Oct  6 23:53:04.730: INFO: Pod "exec-volume-test-preprovisionedpv-26vw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107377111s
Oct  6 23:53:06.761: INFO: Pod "exec-volume-test-preprovisionedpv-26vw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138184731s
Oct  6 23:53:08.785: INFO: Pod "exec-volume-test-preprovisionedpv-26vw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162663684s
STEP: Saw pod success
Oct  6 23:53:08.786: INFO: Pod "exec-volume-test-preprovisionedpv-26vw" satisfied condition "Succeeded or Failed"
Oct  6 23:53:08.809: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod exec-volume-test-preprovisionedpv-26vw container exec-container-preprovisionedpv-26vw: <nil>
STEP: delete the pod
Oct  6 23:53:08.869: INFO: Waiting for pod exec-volume-test-preprovisionedpv-26vw to disappear
Oct  6 23:53:08.904: INFO: Pod exec-volume-test-preprovisionedpv-26vw no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-26vw
Oct  6 23:53:08.904: INFO: Deleting pod "exec-volume-test-preprovisionedpv-26vw" in namespace "volume-1021"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":31,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:59.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
STEP: Destroying namespace "apply-2263" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":3,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 64 lines ...
Oct  6 23:51:54.987: INFO: PersistentVolumeClaim csi-hostpath8tnq7 found but phase is Pending instead of Bound.
Oct  6 23:51:57.028: INFO: PersistentVolumeClaim csi-hostpath8tnq7 found but phase is Pending instead of Bound.
Oct  6 23:51:59.060: INFO: PersistentVolumeClaim csi-hostpath8tnq7 found but phase is Pending instead of Bound.
Oct  6 23:52:01.084: INFO: PersistentVolumeClaim csi-hostpath8tnq7 found and phase=Bound (14.22845854s)
STEP: Creating pod pod-subpath-test-dynamicpv-kp5c
STEP: Creating a pod to test atomic-volume-subpath
Oct  6 23:52:01.160: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kp5c" in namespace "provisioning-6654" to be "Succeeded or Failed"
Oct  6 23:52:01.188: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.779577ms
Oct  6 23:52:03.218: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057698273s
Oct  6 23:52:05.243: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082855448s
Oct  6 23:52:07.267: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107194345s
Oct  6 23:52:09.294: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133800891s
Oct  6 23:52:11.322: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162018409s
... skipping 16 lines ...
Oct  6 23:52:45.782: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Running", Reason="", readiness=true. Elapsed: 44.622355257s
Oct  6 23:52:47.809: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Running", Reason="", readiness=true. Elapsed: 46.648908344s
Oct  6 23:52:49.840: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Running", Reason="", readiness=true. Elapsed: 48.680018517s
Oct  6 23:52:51.866: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Running", Reason="", readiness=true. Elapsed: 50.706335937s
Oct  6 23:52:53.893: INFO: Pod "pod-subpath-test-dynamicpv-kp5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.733380958s
STEP: Saw pod success
Oct  6 23:52:53.893: INFO: Pod "pod-subpath-test-dynamicpv-kp5c" satisfied condition "Succeeded or Failed"
Oct  6 23:52:53.919: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-dynamicpv-kp5c container test-container-subpath-dynamicpv-kp5c: <nil>
STEP: delete the pod
Oct  6 23:52:53.996: INFO: Waiting for pod pod-subpath-test-dynamicpv-kp5c to disappear
Oct  6 23:52:54.020: INFO: Pod pod-subpath-test-dynamicpv-kp5c no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kp5c
Oct  6 23:52:54.020: INFO: Deleting pod "pod-subpath-test-dynamicpv-kp5c" in namespace "provisioning-6654"
... skipping 154 lines ...
Oct  6 23:53:01.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  6 23:53:01.256: INFO: Waiting up to 5m0s for pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441" in namespace "downward-api-5412" to be "Succeeded or Failed"
Oct  6 23:53:01.280: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 23.742383ms
Oct  6 23:53:03.310: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054296425s
Oct  6 23:53:05.339: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082926529s
Oct  6 23:53:07.375: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119156999s
Oct  6 23:53:09.399: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143060743s
Oct  6 23:53:11.425: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169643829s
Oct  6 23:53:13.451: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Pending", Reason="", readiness=false. Elapsed: 12.19475995s
Oct  6 23:53:15.477: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.220725529s
STEP: Saw pod success
Oct  6 23:53:15.477: INFO: Pod "downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441" satisfied condition "Succeeded or Failed"
Oct  6 23:53:15.501: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441 container dapi-container: <nil>
STEP: delete the pod
Oct  6 23:53:15.559: INFO: Waiting for pod downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441 to disappear
Oct  6 23:53:15.593: INFO: Pod downward-api-13dd0c7a-55dd-4939-b477-1652c5a77441 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.547 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:15.680: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 113 lines ...
Oct  6 23:53:01.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  6 23:53:01.447: INFO: Waiting up to 5m0s for pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4" in namespace "downward-api-3848" to be "Succeeded or Failed"
Oct  6 23:53:01.490: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 43.639127ms
Oct  6 23:53:03.515: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068546175s
Oct  6 23:53:05.539: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092550912s
Oct  6 23:53:07.570: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123367959s
Oct  6 23:53:09.599: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151954876s
Oct  6 23:53:11.626: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179043019s
Oct  6 23:53:13.656: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209332663s
Oct  6 23:53:15.683: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.236834523s
STEP: Saw pod success
Oct  6 23:53:15.684: INFO: Pod "downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4" satisfied condition "Succeeded or Failed"
Oct  6 23:53:15.708: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4 container dapi-container: <nil>
STEP: delete the pod
Oct  6 23:53:15.768: INFO: Waiting for pod downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4 to disappear
Oct  6 23:53:15.792: INFO: Pod downward-api-db15b4c4-c665-4a3d-aa77-b2bbc54322f4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.577 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 9 lines ...
Oct  6 23:52:23.206: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-1900bpp9k
STEP: creating a claim
Oct  6 23:52:23.232: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-crd6
STEP: Creating a pod to test exec-volume-test
Oct  6 23:52:23.346: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-crd6" in namespace "volume-1900" to be "Succeeded or Failed"
Oct  6 23:52:23.371: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.707539ms
Oct  6 23:52:25.395: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048858779s
Oct  6 23:52:27.420: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073665707s
Oct  6 23:52:29.444: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098098556s
Oct  6 23:52:31.468: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122360798s
Oct  6 23:52:33.516: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169372358s
... skipping 7 lines ...
Oct  6 23:52:49.719: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.373332189s
Oct  6 23:52:51.744: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.39797206s
Oct  6 23:52:53.768: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.422227067s
Oct  6 23:52:55.796: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.44959321s
Oct  6 23:52:57.827: INFO: Pod "exec-volume-test-dynamicpv-crd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.481081467s
STEP: Saw pod success
Oct  6 23:52:57.827: INFO: Pod "exec-volume-test-dynamicpv-crd6" satisfied condition "Succeeded or Failed"
Oct  6 23:52:57.856: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod exec-volume-test-dynamicpv-crd6 container exec-container-dynamicpv-crd6: <nil>
STEP: delete the pod
Oct  6 23:52:57.971: INFO: Waiting for pod exec-volume-test-dynamicpv-crd6 to disappear
Oct  6 23:52:58.002: INFO: Pod exec-volume-test-dynamicpv-crd6 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-crd6
Oct  6 23:52:58.002: INFO: Deleting pod "exec-volume-test-dynamicpv-crd6" in namespace "volume-1900"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:18.375: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 23 lines ...
Oct  6 23:53:01.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  6 23:53:02.007: INFO: Waiting up to 5m0s for pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513" in namespace "emptydir-1557" to be "Succeeded or Failed"
Oct  6 23:53:02.031: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 23.575277ms
Oct  6 23:53:04.054: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046761741s
Oct  6 23:53:06.079: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071599355s
Oct  6 23:53:08.107: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099785799s
Oct  6 23:53:10.134: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12686685s
Oct  6 23:53:12.159: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 10.151881488s
Oct  6 23:53:14.183: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175929619s
Oct  6 23:53:16.212: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Pending", Reason="", readiness=false. Elapsed: 14.204906163s
Oct  6 23:53:18.237: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.229927742s
STEP: Saw pod success
Oct  6 23:53:18.237: INFO: Pod "pod-9777f488-3c50-4fd0-ab0f-9a1af2110513" satisfied condition "Succeeded or Failed"
Oct  6 23:53:18.261: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod pod-9777f488-3c50-4fd0-ab0f-9a1af2110513 container test-container: <nil>
STEP: delete the pod
Oct  6 23:53:18.350: INFO: Waiting for pod pod-9777f488-3c50-4fd0-ab0f-9a1af2110513 to disappear
Oct  6 23:53:18.373: INFO: Pod pod-9777f488-3c50-4fd0-ab0f-9a1af2110513 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.571 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:18.454: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
      Only supported for providers [vsphere] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:12.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct  6 23:53:12.862: INFO: Waiting up to 5m0s for pod "client-containers-01875f5a-6f2d-4169-9372-f3cd10102004" in namespace "containers-9503" to be "Succeeded or Failed"
Oct  6 23:53:12.894: INFO: Pod "client-containers-01875f5a-6f2d-4169-9372-f3cd10102004": Phase="Pending", Reason="", readiness=false. Elapsed: 32.244199ms
Oct  6 23:53:14.928: INFO: Pod "client-containers-01875f5a-6f2d-4169-9372-f3cd10102004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066612887s
Oct  6 23:53:16.954: INFO: Pod "client-containers-01875f5a-6f2d-4169-9372-f3cd10102004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092259503s
Oct  6 23:53:18.980: INFO: Pod "client-containers-01875f5a-6f2d-4169-9372-f3cd10102004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117898195s
STEP: Saw pod success
Oct  6 23:53:18.980: INFO: Pod "client-containers-01875f5a-6f2d-4169-9372-f3cd10102004" satisfied condition "Succeeded or Failed"
Oct  6 23:53:19.004: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod client-containers-01875f5a-6f2d-4169-9372-f3cd10102004 container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:53:19.088: INFO: Waiting for pod client-containers-01875f5a-6f2d-4169-9372-f3cd10102004 to disappear
Oct  6 23:53:19.113: INFO: Pod client-containers-01875f5a-6f2d-4169-9372-f3cd10102004 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.477 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:19.183: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 260 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":8,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:19.813: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:20.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-8785" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":9,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 68 lines ...
Oct  6 23:51:52.090: INFO: PersistentVolumeClaim csi-hostpathzflmt found but phase is Pending instead of Bound.
Oct  6 23:51:54.114: INFO: PersistentVolumeClaim csi-hostpathzflmt found but phase is Pending instead of Bound.
Oct  6 23:51:56.137: INFO: PersistentVolumeClaim csi-hostpathzflmt found but phase is Pending instead of Bound.
Oct  6 23:51:58.165: INFO: PersistentVolumeClaim csi-hostpathzflmt found and phase=Bound (22.370482817s)
STEP: Creating pod pod-subpath-test-dynamicpv-rz7d
STEP: Creating a pod to test subpath
Oct  6 23:51:58.242: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rz7d" in namespace "provisioning-7858" to be "Succeeded or Failed"
Oct  6 23:51:58.270: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.991413ms
Oct  6 23:52:00.295: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053012325s
Oct  6 23:52:02.318: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076164403s
Oct  6 23:52:04.374: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131972084s
Oct  6 23:52:06.400: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158336495s
Oct  6 23:52:08.426: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183904357s
... skipping 3 lines ...
Oct  6 23:52:16.555: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.313834632s
Oct  6 23:52:18.583: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.340922332s
Oct  6 23:52:20.608: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.366108736s
Oct  6 23:52:22.634: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.392445824s
Oct  6 23:52:24.660: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.418047561s
STEP: Saw pod success
Oct  6 23:52:24.660: INFO: Pod "pod-subpath-test-dynamicpv-rz7d" satisfied condition "Succeeded or Failed"
Oct  6 23:52:24.684: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-dynamicpv-rz7d container test-container-subpath-dynamicpv-rz7d: <nil>
STEP: delete the pod
Oct  6 23:52:24.749: INFO: Waiting for pod pod-subpath-test-dynamicpv-rz7d to disappear
Oct  6 23:52:24.772: INFO: Pod pod-subpath-test-dynamicpv-rz7d no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rz7d
Oct  6 23:52:24.772: INFO: Deleting pod "pod-subpath-test-dynamicpv-rz7d" in namespace "provisioning-7858"
STEP: Creating pod pod-subpath-test-dynamicpv-rz7d
STEP: Creating a pod to test subpath
Oct  6 23:52:24.828: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rz7d" in namespace "provisioning-7858" to be "Succeeded or Failed"
Oct  6 23:52:24.851: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.893325ms
Oct  6 23:52:26.875: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046723023s
Oct  6 23:52:28.899: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070477103s
Oct  6 23:52:30.922: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093831771s
Oct  6 23:52:32.947: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118679746s
Oct  6 23:52:34.970: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142042867s
... skipping 6 lines ...
Oct  6 23:52:49.143: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.31519119s
Oct  6 23:52:51.170: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.341968551s
Oct  6 23:52:53.194: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.365983201s
Oct  6 23:52:55.217: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.388625395s
Oct  6 23:52:57.246: INFO: Pod "pod-subpath-test-dynamicpv-rz7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.417921757s
STEP: Saw pod success
Oct  6 23:52:57.246: INFO: Pod "pod-subpath-test-dynamicpv-rz7d" satisfied condition "Succeeded or Failed"
Oct  6 23:52:57.269: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-dynamicpv-rz7d container test-container-subpath-dynamicpv-rz7d: <nil>
STEP: delete the pod
Oct  6 23:52:57.336: INFO: Waiting for pod pod-subpath-test-dynamicpv-rz7d to disappear
Oct  6 23:52:57.362: INFO: Pod pod-subpath-test-dynamicpv-rz7d no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rz7d
Oct  6 23:52:57.362: INFO: Deleting pod "pod-subpath-test-dynamicpv-rz7d" in namespace "provisioning-7858"
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:21.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apf
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
W1006 23:52:31.819732    5643 gce_instances.go:410] Cloud object does not have informers set, should only happen in E2E binary.
Oct  6 23:52:33.448: INFO: Successfully created a new PD: "e2e-a9b9d475-dff7-4647-b630-abc70eee18da".
Oct  6 23:52:33.449: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-8gj6
STEP: Creating a pod to test exec-volume-test
W1006 23:52:33.477502    5643 warnings.go:70] spec.nodeSelector[failure-domain.beta.kubernetes.io/zone]: deprecated since v1.17; use "topology.kubernetes.io/zone" instead
Oct  6 23:52:33.477: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-8gj6" in namespace "volume-273" to be "Succeeded or Failed"
Oct  6 23:52:33.515: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.088281ms
Oct  6 23:52:35.539: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061850458s
Oct  6 23:52:37.564: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086889461s
Oct  6 23:52:39.588: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110463143s
Oct  6 23:52:41.612: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134983509s
Oct  6 23:52:43.636: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15832454s
... skipping 3 lines ...
Oct  6 23:52:51.734: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.256740751s
Oct  6 23:52:53.757: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.280020377s
Oct  6 23:52:55.781: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.303283851s
Oct  6 23:52:57.808: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.330270333s
Oct  6 23:52:59.836: INFO: Pod "exec-volume-test-inlinevolume-8gj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.358573855s
STEP: Saw pod success
Oct  6 23:52:59.836: INFO: Pod "exec-volume-test-inlinevolume-8gj6" satisfied condition "Succeeded or Failed"
Oct  6 23:52:59.859: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod exec-volume-test-inlinevolume-8gj6 container exec-container-inlinevolume-8gj6: <nil>
STEP: delete the pod
Oct  6 23:52:59.934: INFO: Waiting for pod exec-volume-test-inlinevolume-8gj6 to disappear
Oct  6 23:52:59.957: INFO: Pod exec-volume-test-inlinevolume-8gj6 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-8gj6
Oct  6 23:52:59.957: INFO: Deleting pod "exec-volume-test-inlinevolume-8gj6" in namespace "volume-273"
Oct  6 23:53:00.554: INFO: error deleting PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:00.554: INFO: Couldn't delete PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:06.064: INFO: error deleting PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:06.064: INFO: Couldn't delete PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:11.641: INFO: error deleting PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:11.641: INFO: Couldn't delete PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:17.215: INFO: error deleting PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:17.215: INFO: Couldn't delete PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-a9b9d475-dff7-4647-b630-abc70eee18da' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-vcbk', resourceInUseByAnotherResource
Oct  6 23:53:24.011: INFO: Successfully deleted PD "e2e-a9b9d475-dff7-4647-b630-abc70eee18da".
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:24.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-273" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext3)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:24.098: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":3,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:01.286: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Oct  6 23:53:12.265: INFO: PersistentVolumeClaim pvc-q5ctd found but phase is Pending instead of Bound.
Oct  6 23:53:14.296: INFO: PersistentVolumeClaim pvc-q5ctd found and phase=Bound (8.142167922s)
Oct  6 23:53:14.296: INFO: Waiting up to 3m0s for PersistentVolume local-fjxdq to have phase Bound
Oct  6 23:53:14.321: INFO: PersistentVolume local-fjxdq found and phase=Bound (24.304143ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lnc8
STEP: Creating a pod to test subpath
Oct  6 23:53:14.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lnc8" in namespace "provisioning-8055" to be "Succeeded or Failed"
Oct  6 23:53:14.421: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.210557ms
Oct  6 23:53:16.450: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053340424s
Oct  6 23:53:18.477: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080434125s
Oct  6 23:53:20.505: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108813356s
Oct  6 23:53:22.533: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13622051s
Oct  6 23:53:24.560: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.163941815s
Oct  6 23:53:26.586: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.189464878s
STEP: Saw pod success
Oct  6 23:53:26.586: INFO: Pod "pod-subpath-test-preprovisionedpv-lnc8" satisfied condition "Succeeded or Failed"
Oct  6 23:53:26.613: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-preprovisionedpv-lnc8 container test-container-subpath-preprovisionedpv-lnc8: <nil>
STEP: delete the pod
Oct  6 23:53:26.738: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lnc8 to disappear
Oct  6 23:53:26.763: INFO: Pod pod-subpath-test-preprovisionedpv-lnc8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lnc8
Oct  6 23:53:26.763: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lnc8" in namespace "provisioning-8055"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:27.666: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
• [SLOW TEST:14.028 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:18.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  6 23:53:18.699: INFO: Waiting up to 5m0s for pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a" in namespace "security-context-4729" to be "Succeeded or Failed"
Oct  6 23:53:18.722: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.739898ms
Oct  6 23:53:20.749: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050122877s
Oct  6 23:53:22.774: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074828333s
Oct  6 23:53:24.797: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097598368s
Oct  6 23:53:26.821: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121928428s
Oct  6 23:53:28.845: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.145640797s
Oct  6 23:53:30.868: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.169424281s
Oct  6 23:53:32.896: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.197127553s
STEP: Saw pod success
Oct  6 23:53:32.896: INFO: Pod "security-context-41417a84-103f-4b85-bf49-c4f88e1c625a" satisfied condition "Succeeded or Failed"
Oct  6 23:53:32.920: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod security-context-41417a84-103f-4b85-bf49-c4f88e1c625a container test-container: <nil>
STEP: delete the pod
Oct  6 23:53:33.004: INFO: Waiting for pod security-context-41417a84-103f-4b85-bf49-c4f88e1c625a to disappear
Oct  6 23:53:33.038: INFO: Pod security-context-41417a84-103f-4b85-bf49-c4f88e1c625a no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.570 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:33.142: INFO: Only supported for providers [openstack] (not gce)
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":3,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:34.724: INFO: Only supported for providers [vsphere] (not gce)
... skipping 86 lines ...
• [SLOW TEST:19.003 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:34.852: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 81 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:12.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:26.264 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:38.819: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:39.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2854" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:6.888 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:40.083: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:52:59.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 38 lines ...
      Driver local doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:40.168: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 342 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":10,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:23.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:18.935 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:40.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  6 23:53:40.341: INFO: Waiting up to 5m0s for pod "security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f" in namespace "security-context-1945" to be "Succeeded or Failed"
Oct  6 23:53:40.364: INFO: Pod "security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.937916ms
Oct  6 23:53:42.388: INFO: Pod "security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047072845s
Oct  6 23:53:44.413: INFO: Pod "security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071493201s
Oct  6 23:53:46.437: INFO: Pod "security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096118784s
STEP: Saw pod success
Oct  6 23:53:46.438: INFO: Pod "security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f" satisfied condition "Succeeded or Failed"
Oct  6 23:53:46.460: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f container test-container: <nil>
STEP: delete the pod
Oct  6 23:53:46.524: INFO: Waiting for pod security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f to disappear
Oct  6 23:53:46.552: INFO: Pod security-context-4e9d7c7d-7ad3-4b85-a14a-25b05ae1575f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.431 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:53:42.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97" in namespace "projected-1849" to be "Succeeded or Failed"
Oct  6 23:53:42.534: INFO: Pod "downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97": Phase="Pending", Reason="", readiness=false. Elapsed: 24.540295ms
Oct  6 23:53:44.557: INFO: Pod "downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048270403s
Oct  6 23:53:46.593: INFO: Pod "downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084332438s
STEP: Saw pod success
Oct  6 23:53:46.594: INFO: Pod "downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97" satisfied condition "Succeeded or Failed"
Oct  6 23:53:46.623: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97 container client-container: <nil>
STEP: delete the pod
Oct  6 23:53:46.706: INFO: Waiting for pod downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97 to disappear
Oct  6 23:53:46.730: INFO: Pod downwardapi-volume-bb7b3893-52d1-4b3e-8ea6-61489a7bcd97 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:46.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1849" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:46.793: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Oct  6 23:53:11.961: INFO: PersistentVolumeClaim pvc-26b5p found but phase is Pending instead of Bound.
Oct  6 23:53:13.988: INFO: PersistentVolumeClaim pvc-26b5p found and phase=Bound (8.155768431s)
Oct  6 23:53:13.988: INFO: Waiting up to 3m0s for PersistentVolume local-7644t to have phase Bound
Oct  6 23:53:14.012: INFO: PersistentVolume local-7644t found and phase=Bound (24.341062ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9c7g
STEP: Creating a pod to test atomic-volume-subpath
Oct  6 23:53:14.101: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9c7g" in namespace "provisioning-7949" to be "Succeeded or Failed"
Oct  6 23:53:14.125: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Pending", Reason="", readiness=false. Elapsed: 23.750794ms
Oct  6 23:53:16.153: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051352327s
Oct  6 23:53:18.178: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076514059s
Oct  6 23:53:20.204: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102682945s
Oct  6 23:53:22.231: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129810759s
Oct  6 23:53:24.257: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15497202s
... skipping 6 lines ...
Oct  6 23:53:38.442: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Running", Reason="", readiness=true. Elapsed: 24.340065409s
Oct  6 23:53:40.467: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Running", Reason="", readiness=true. Elapsed: 26.365265762s
Oct  6 23:53:42.492: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Running", Reason="", readiness=true. Elapsed: 28.390549235s
Oct  6 23:53:44.520: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Running", Reason="", readiness=true. Elapsed: 30.418175812s
Oct  6 23:53:46.550: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.4483073s
STEP: Saw pod success
Oct  6 23:53:46.550: INFO: Pod "pod-subpath-test-preprovisionedpv-9c7g" satisfied condition "Succeeded or Failed"
Oct  6 23:53:46.594: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-preprovisionedpv-9c7g container test-container-subpath-preprovisionedpv-9c7g: <nil>
STEP: delete the pod
Oct  6 23:53:46.707: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9c7g to disappear
Oct  6 23:53:46.734: INFO: Pod pod-subpath-test-preprovisionedpv-9c7g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9c7g
Oct  6 23:53:46.734: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9c7g" in namespace "provisioning-7949"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:47.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9142" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:47.876: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
• [SLOW TEST:41.214 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:48.384: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 25 lines ...
Oct  6 23:53:27.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
Oct  6 23:53:27.801: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:53:27.859: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1748" in namespace "provisioning-1748" to be "Succeeded or Failed"
Oct  6 23:53:27.884: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 25.038497ms
Oct  6 23:53:29.910: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050144258s
Oct  6 23:53:31.935: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075478266s
Oct  6 23:53:33.960: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100084735s
Oct  6 23:53:35.984: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125043276s
Oct  6 23:53:38.011: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 10.151258211s
Oct  6 23:53:40.036: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.177006089s
STEP: Saw pod success
Oct  6 23:53:40.037: INFO: Pod "hostpath-symlink-prep-provisioning-1748" satisfied condition "Succeeded or Failed"
Oct  6 23:53:40.037: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1748" in namespace "provisioning-1748"
Oct  6 23:53:40.071: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1748" to be fully deleted
Oct  6 23:53:40.094: INFO: Creating resource for inline volume
Oct  6 23:53:40.095: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct  6 23:53:40.095: INFO: Deleting pod "pod-subpath-test-inlinevolume-rwlk" in namespace "provisioning-1748"
Oct  6 23:53:40.154: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1748" in namespace "provisioning-1748" to be "Succeeded or Failed"
Oct  6 23:53:40.179: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 24.646826ms
Oct  6 23:53:42.206: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051913692s
Oct  6 23:53:44.233: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078537507s
Oct  6 23:53:46.259: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104785385s
Oct  6 23:53:48.285: INFO: Pod "hostpath-symlink-prep-provisioning-1748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131077799s
STEP: Saw pod success
Oct  6 23:53:48.285: INFO: Pod "hostpath-symlink-prep-provisioning-1748" satisfied condition "Succeeded or Failed"
Oct  6 23:53:48.285: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1748" in namespace "provisioning-1748"
Oct  6 23:53:48.318: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1748" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:48.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1748" for this suite.
... skipping 37 lines ...
Oct  6 23:53:10.766: INFO: PersistentVolumeClaim pvc-x45lx found but phase is Pending instead of Bound.
Oct  6 23:53:12.794: INFO: PersistentVolumeClaim pvc-x45lx found and phase=Bound (6.107563213s)
Oct  6 23:53:12.794: INFO: Waiting up to 3m0s for PersistentVolume gcepd-7tgqm to have phase Bound
Oct  6 23:53:12.819: INFO: PersistentVolume gcepd-7tgqm found and phase=Bound (24.578806ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-2hh8
STEP: Creating a pod to test exec-volume-test
Oct  6 23:53:12.896: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2hh8" in namespace "volume-7064" to be "Succeeded or Failed"
Oct  6 23:53:12.919: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.643226ms
Oct  6 23:53:14.946: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050749129s
Oct  6 23:53:16.973: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077645265s
Oct  6 23:53:18.998: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102645273s
Oct  6 23:53:21.026: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129849456s
Oct  6 23:53:23.053: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157484264s
Oct  6 23:53:25.079: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.183431551s
Oct  6 23:53:27.105: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209492463s
Oct  6 23:53:29.131: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.235335159s
Oct  6 23:53:31.157: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.261725657s
Oct  6 23:53:33.183: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.287196164s
Oct  6 23:53:35.208: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.312099367s
STEP: Saw pod success
Oct  6 23:53:35.208: INFO: Pod "exec-volume-test-preprovisionedpv-2hh8" satisfied condition "Succeeded or Failed"
Oct  6 23:53:35.233: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod exec-volume-test-preprovisionedpv-2hh8 container exec-container-preprovisionedpv-2hh8: <nil>
STEP: delete the pod
Oct  6 23:53:35.293: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2hh8 to disappear
Oct  6 23:53:35.317: INFO: Pod exec-volume-test-preprovisionedpv-2hh8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-2hh8
Oct  6 23:53:35.317: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2hh8" in namespace "volume-7064"
STEP: Deleting pv and pvc
Oct  6 23:53:35.342: INFO: Deleting PersistentVolumeClaim "pvc-x45lx"
Oct  6 23:53:35.367: INFO: Deleting PersistentVolume "gcepd-7tgqm"
Oct  6 23:53:36.073: INFO: error deleting PD "e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d', resourceInUseByAnotherResource
Oct  6 23:53:36.073: INFO: Couldn't delete PD "e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d', resourceInUseByAnotherResource
Oct  6 23:53:41.594: INFO: error deleting PD "e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d', resourceInUseByAnotherResource
Oct  6 23:53:41.594: INFO: Couldn't delete PD "e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d', resourceInUseByAnotherResource
Oct  6 23:53:48.362: INFO: Successfully deleted PD "e2e-fcf7f61b-4795-4495-ae8f-65c56aa21b42".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:48.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7064" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":53,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:48.461: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:53:49.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-9306" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":109,"failed":0}
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:48.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-2b244b59-b55b-45f4-8e3f-a6cfa983be9b
STEP: Creating secret with name secret-projected-all-test-volume-5dc40dbe-ce56-4654-8d7d-6195be25e468
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct  6 23:53:48.822: INFO: Waiting up to 5m0s for pod "projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f" in namespace "projected-8005" to be "Succeeded or Failed"
Oct  6 23:53:48.858: INFO: Pod "projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.355467ms
Oct  6 23:53:50.887: INFO: Pod "projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065039462s
Oct  6 23:53:52.911: INFO: Pod "projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089047497s
Oct  6 23:53:54.936: INFO: Pod "projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113747841s
STEP: Saw pod success
Oct  6 23:53:54.936: INFO: Pod "projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f" satisfied condition "Succeeded or Failed"
Oct  6 23:53:54.960: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f container projected-all-volume-test: <nil>
STEP: delete the pod
Oct  6 23:53:55.040: INFO: Waiting for pod projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f to disappear
Oct  6 23:53:55.072: INFO: Pod projected-volume-3507858f-f42b-4a9c-8719-e9bd2798635f no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.544 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":109,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:55.192: INFO: Only supported for providers [azure] (not gce)
... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:55.381: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 153 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:500
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":10,"skipped":109,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:53:58.331: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 107 lines ...
• [SLOW TEST:24.573 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:53:47.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd" in namespace "projected-9426" to be "Succeeded or Failed"
Oct  6 23:53:47.395: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.761627ms
Oct  6 23:53:49.421: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056328985s
Oct  6 23:53:51.447: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082173925s
Oct  6 23:53:53.488: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123146975s
Oct  6 23:53:55.511: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146866621s
Oct  6 23:53:57.536: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.171289726s
Oct  6 23:53:59.560: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.195436665s
STEP: Saw pod success
Oct  6 23:53:59.560: INFO: Pod "downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd" satisfied condition "Succeeded or Failed"
Oct  6 23:53:59.583: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd container client-container: <nil>
STEP: delete the pod
Oct  6 23:53:59.646: INFO: Waiting for pod downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd to disappear
Oct  6 23:53:59.670: INFO: Pod downwardapi-volume-b993545f-b49c-4069-86b1-ce1fafa7d6fd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.514 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:58.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  6 23:53:58.542: INFO: Waiting up to 5m0s for pod "pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2" in namespace "emptydir-6836" to be "Succeeded or Failed"
Oct  6 23:53:58.565: INFO: Pod "pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.771734ms
Oct  6 23:54:00.589: INFO: Pod "pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.046499325s
STEP: Saw pod success
Oct  6 23:54:00.589: INFO: Pod "pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2" satisfied condition "Succeeded or Failed"
Oct  6 23:54:00.614: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2 container test-container: <nil>
STEP: delete the pod
Oct  6 23:54:00.676: INFO: Waiting for pod pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2 to disappear
Oct  6 23:54:00.699: INFO: Pod pod-a4c2a465-f0dc-4c13-9b12-6c852f152cf2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:00.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6836" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":126,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:00.777: INFO: Only supported for providers [openstack] (not gce)
... skipping 79 lines ...
• [SLOW TEST:14.959 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:04.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8174" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:04.084: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-nhkp
STEP: Creating a pod to test atomic-volume-subpath
Oct  6 23:53:39.497: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nhkp" in namespace "subpath-9720" to be "Succeeded or Failed"
Oct  6 23:53:39.522: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Pending", Reason="", readiness=false. Elapsed: 24.209043ms
Oct  6 23:53:41.546: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048922022s
Oct  6 23:53:43.574: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 4.07622477s
Oct  6 23:53:45.603: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 6.105117386s
Oct  6 23:53:47.628: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 8.1306455s
Oct  6 23:53:49.653: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 10.155993792s
... skipping 2 lines ...
Oct  6 23:53:55.740: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 16.24250589s
Oct  6 23:53:57.767: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 18.269197276s
Oct  6 23:53:59.791: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 20.293708589s
Oct  6 23:54:01.816: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Running", Reason="", readiness=true. Elapsed: 22.318742324s
Oct  6 23:54:03.843: INFO: Pod "pod-subpath-test-configmap-nhkp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.345815323s
STEP: Saw pod success
Oct  6 23:54:03.843: INFO: Pod "pod-subpath-test-configmap-nhkp" satisfied condition "Succeeded or Failed"
Oct  6 23:54:03.868: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-configmap-nhkp container test-container-subpath-configmap-nhkp: <nil>
STEP: delete the pod
Oct  6 23:54:03.967: INFO: Waiting for pod pod-subpath-test-configmap-nhkp to disappear
Oct  6 23:54:04.001: INFO: Pod pod-subpath-test-configmap-nhkp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nhkp
Oct  6 23:54:04.002: INFO: Deleting pod "pod-subpath-test-configmap-nhkp" in namespace "subpath-9720"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:04.152: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:04.465: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 46 lines ...
Oct  6 23:53:57.335: INFO: PersistentVolumeClaim pvc-7tzxp found but phase is Pending instead of Bound.
Oct  6 23:53:59.358: INFO: PersistentVolumeClaim pvc-7tzxp found and phase=Bound (10.141865232s)
Oct  6 23:53:59.358: INFO: Waiting up to 3m0s for PersistentVolume local-ht684 to have phase Bound
Oct  6 23:53:59.381: INFO: PersistentVolume local-ht684 found and phase=Bound (22.576316ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4fg2
STEP: Creating a pod to test subpath
Oct  6 23:53:59.455: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4fg2" in namespace "provisioning-7345" to be "Succeeded or Failed"
Oct  6 23:53:59.479: INFO: Pod "pod-subpath-test-preprovisionedpv-4fg2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.834211ms
Oct  6 23:54:01.505: INFO: Pod "pod-subpath-test-preprovisionedpv-4fg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049703033s
Oct  6 23:54:03.530: INFO: Pod "pod-subpath-test-preprovisionedpv-4fg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074470252s
Oct  6 23:54:05.556: INFO: Pod "pod-subpath-test-preprovisionedpv-4fg2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100622715s
STEP: Saw pod success
Oct  6 23:54:05.556: INFO: Pod "pod-subpath-test-preprovisionedpv-4fg2" satisfied condition "Succeeded or Failed"
Oct  6 23:54:05.581: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-preprovisionedpv-4fg2 container test-container-subpath-preprovisionedpv-4fg2: <nil>
STEP: delete the pod
Oct  6 23:54:05.636: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4fg2 to disappear
Oct  6 23:54:05.659: INFO: Pod pod-subpath-test-preprovisionedpv-4fg2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4fg2
Oct  6 23:54:05.659: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4fg2" in namespace "provisioning-7345"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:06.339: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
Oct  6 23:53:56.219: INFO: PersistentVolumeClaim pvc-64k6k found but phase is Pending instead of Bound.
Oct  6 23:53:58.243: INFO: PersistentVolumeClaim pvc-64k6k found and phase=Bound (4.079695173s)
Oct  6 23:53:58.243: INFO: Waiting up to 3m0s for PersistentVolume local-mh6rm to have phase Bound
Oct  6 23:53:58.273: INFO: PersistentVolume local-mh6rm found and phase=Bound (29.92592ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-cz2z
STEP: Creating a pod to test exec-volume-test
Oct  6 23:53:58.344: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-cz2z" in namespace "volume-4166" to be "Succeeded or Failed"
Oct  6 23:53:58.366: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.631417ms
Oct  6 23:54:00.390: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046609695s
Oct  6 23:54:02.415: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071376828s
Oct  6 23:54:04.452: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108540707s
Oct  6 23:54:06.483: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139506836s
Oct  6 23:54:08.510: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166326381s
STEP: Saw pod success
Oct  6 23:54:08.510: INFO: Pod "exec-volume-test-preprovisionedpv-cz2z" satisfied condition "Succeeded or Failed"
Oct  6 23:54:08.532: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod exec-volume-test-preprovisionedpv-cz2z container exec-container-preprovisionedpv-cz2z: <nil>
STEP: delete the pod
Oct  6 23:54:08.600: INFO: Waiting for pod exec-volume-test-preprovisionedpv-cz2z to disappear
Oct  6 23:54:08.623: INFO: Pod exec-volume-test-preprovisionedpv-cz2z no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-cz2z
Oct  6 23:54:08.623: INFO: Deleting pod "exec-volume-test-preprovisionedpv-cz2z" in namespace "volume-4166"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:09.294: INFO: Only supported for providers [azure] (not gce)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:54:01.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1" in namespace "projected-6512" to be "Succeeded or Failed"
Oct  6 23:54:01.085: INFO: Pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.985572ms
Oct  6 23:54:03.114: INFO: Pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052032488s
Oct  6 23:54:05.138: INFO: Pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076534268s
Oct  6 23:54:07.170: INFO: Pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108553141s
Oct  6 23:54:09.197: INFO: Pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.135559374s
STEP: Saw pod success
Oct  6 23:54:09.197: INFO: Pod "downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1" satisfied condition "Succeeded or Failed"
Oct  6 23:54:09.221: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1 container client-container: <nil>
STEP: delete the pod
Oct  6 23:54:09.316: INFO: Waiting for pod downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1 to disappear
Oct  6 23:54:09.342: INFO: Pod downwardapi-volume-e7e6c5cf-a6eb-403a-958b-a656b686c0f1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.578 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":136,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:09.419: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  6 23:54:06.504: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:54:06.532: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-f42w
STEP: Creating a pod to test subpath
Oct  6 23:54:06.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-f42w" in namespace "provisioning-782" to be "Succeeded or Failed"
Oct  6 23:54:06.597: INFO: Pod "pod-subpath-test-inlinevolume-f42w": Phase="Pending", Reason="", readiness=false. Elapsed: 25.007979ms
Oct  6 23:54:08.620: INFO: Pod "pod-subpath-test-inlinevolume-f42w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048598563s
Oct  6 23:54:10.644: INFO: Pod "pod-subpath-test-inlinevolume-f42w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072876526s
STEP: Saw pod success
Oct  6 23:54:10.645: INFO: Pod "pod-subpath-test-inlinevolume-f42w" satisfied condition "Succeeded or Failed"
Oct  6 23:54:10.669: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-inlinevolume-f42w container test-container-volume-inlinevolume-f42w: <nil>
STEP: delete the pod
Oct  6 23:54:10.730: INFO: Waiting for pod pod-subpath-test-inlinevolume-f42w to disappear
Oct  6 23:54:10.754: INFO: Pod pod-subpath-test-inlinevolume-f42w no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-f42w
Oct  6 23:54:10.754: INFO: Deleting pod "pod-subpath-test-inlinevolume-f42w" in namespace "provisioning-782"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:10.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-782" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:10.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct  6 23:54:11.048: INFO: found topology map[topology.kubernetes.io/zone:us-west3-a]
Oct  6 23:54:11.048: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
Oct  6 23:54:11.048: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 298 lines ...
Oct  6 23:54:01.198: INFO: Pod gcepd-client still exists
Oct  6 23:54:03.200: INFO: Waiting for pod gcepd-client to disappear
Oct  6 23:54:03.224: INFO: Pod gcepd-client still exists
Oct  6 23:54:05.199: INFO: Waiting for pod gcepd-client to disappear
Oct  6 23:54:05.223: INFO: Pod gcepd-client no longer exists
STEP: cleaning the environment after gcepd
Oct  6 23:54:05.802: INFO: error deleting PD "e2e-f062f828-f9b1-4d70-96d3-209f0371414a": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-f062f828-f9b1-4d70-96d3-209f0371414a' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f', resourceInUseByAnotherResource
Oct  6 23:54:05.802: INFO: Couldn't delete PD "e2e-f062f828-f9b1-4d70-96d3-209f0371414a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-f062f828-f9b1-4d70-96d3-209f0371414a' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-xm8f', resourceInUseByAnotherResource
Oct  6 23:54:12.650: INFO: Successfully deleted PD "e2e-f062f828-f9b1-4d70-96d3-209f0371414a".
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8317" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext3)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data","total":-1,"completed":5,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:12.717: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 230 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:221
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:13.655: INFO: Only supported for providers [openstack] (not gce)
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":16,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:14.423: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 211 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-37154e17-17d1-487d-b250-b40e08ed86b8
STEP: Creating a pod to test consume secrets
Oct  6 23:54:14.648: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84" in namespace "projected-9038" to be "Succeeded or Failed"
Oct  6 23:54:14.675: INFO: Pod "pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84": Phase="Pending", Reason="", readiness=false. Elapsed: 27.07744ms
Oct  6 23:54:16.701: INFO: Pod "pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052292672s
Oct  6 23:54:18.725: INFO: Pod "pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07665658s
STEP: Saw pod success
Oct  6 23:54:18.725: INFO: Pod "pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84" satisfied condition "Succeeded or Failed"
Oct  6 23:54:18.749: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  6 23:54:18.807: INFO: Waiting for pod pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84 to disappear
Oct  6 23:54:18.831: INFO: Pod pod-projected-secrets-f1e61df7-c423-4a5d-969f-ef5dae00ab84 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:18.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9038" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:18.905: INFO: Only supported for providers [aws] (not gce)
... skipping 125 lines ...
• [SLOW TEST:7.262 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:20.017: INFO: Driver local doesn't support ext3 -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 23 lines ...
Oct  6 23:53:21.212: INFO: PersistentVolumeClaim pvc-vcxxc found and phase=Bound (25.795324ms)
Oct  6 23:53:21.212: INFO: Waiting up to 3m0s for PersistentVolume nfs-z82sx to have phase Bound
Oct  6 23:53:21.236: INFO: PersistentVolume nfs-z82sx found and phase=Bound (24.570395ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Oct  6 23:53:21.314: INFO: Waiting up to 5m0s for pod "pvc-tester-tzv2g" in namespace "pv-478" to be "Succeeded or Failed"
Oct  6 23:53:21.339: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 24.824139ms
Oct  6 23:53:23.364: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049761141s
Oct  6 23:53:25.390: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075021447s
Oct  6 23:53:27.414: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099906781s
Oct  6 23:53:29.440: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12505551s
Oct  6 23:53:31.472: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157202052s
Oct  6 23:53:33.497: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182694881s
Oct  6 23:53:35.525: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 14.210120618s
Oct  6 23:53:37.552: INFO: Pod "pvc-tester-tzv2g": Phase="Pending", Reason="", readiness=false. Elapsed: 16.237202242s
Oct  6 23:53:39.576: INFO: Pod "pvc-tester-tzv2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.26181667s
STEP: Saw pod success
Oct  6 23:53:39.576: INFO: Pod "pvc-tester-tzv2g" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Oct  6 23:53:39.576: INFO: Deleting pod "pvc-tester-tzv2g" in namespace "pv-478"
Oct  6 23:53:39.607: INFO: Wait up to 5m0s for pod "pvc-tester-tzv2g" to be fully deleted
Oct  6 23:53:39.631: INFO: Deleting PVC pvc-vcxxc to trigger reclamation of PV 
Oct  6 23:53:39.631: INFO: Deleting PersistentVolumeClaim "pvc-vcxxc"
Oct  6 23:53:39.659: INFO: Waiting for reclaim process to complete.
... skipping 8 lines ...
Oct  6 23:53:51.877: INFO: PV nfs-z82sx now in "Available" phase
STEP: Re-mounting the volume.
Oct  6 23:53:51.910: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-97sms] to have phase Bound
Oct  6 23:53:51.939: INFO: PersistentVolumeClaim pvc-97sms found but phase is Pending instead of Bound.
Oct  6 23:53:53.966: INFO: PersistentVolumeClaim pvc-97sms found and phase=Bound (2.055647047s)
STEP: Verifying the mount has been cleaned.
Oct  6 23:53:53.997: INFO: Waiting up to 5m0s for pod "pvc-tester-6zv62" in namespace "pv-478" to be "Succeeded or Failed"
Oct  6 23:53:54.023: INFO: Pod "pvc-tester-6zv62": Phase="Pending", Reason="", readiness=false. Elapsed: 26.144852ms
Oct  6 23:53:56.049: INFO: Pod "pvc-tester-6zv62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052232248s
Oct  6 23:53:58.076: INFO: Pod "pvc-tester-6zv62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079541773s
Oct  6 23:54:00.101: INFO: Pod "pvc-tester-6zv62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104405812s
Oct  6 23:54:02.128: INFO: Pod "pvc-tester-6zv62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130880855s
Oct  6 23:54:04.159: INFO: Pod "pvc-tester-6zv62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.161771469s
STEP: Saw pod success
Oct  6 23:54:04.159: INFO: Pod "pvc-tester-6zv62" satisfied condition "Succeeded or Failed"
Oct  6 23:54:04.159: INFO: Deleting pod "pvc-tester-6zv62" in namespace "pv-478"
Oct  6 23:54:04.190: INFO: Wait up to 5m0s for pod "pvc-tester-6zv62" to be fully deleted
Oct  6 23:54:04.215: INFO: Pod exited without failure; the volume has been recycled.
Oct  6 23:54:04.215: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Oct  6 23:54:04.215: INFO: Deleting PVC pvc-97sms to trigger reclamation of PV 
Oct  6 23:54:04.215: INFO: Deleting PersistentVolumeClaim "pvc-97sms"
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":5,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:20.615: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:21.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4894" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:21.689: INFO: Only supported for providers [vsphere] (not gce)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:18.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:22.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6715" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 141 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:22.932: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":7,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:16.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 245 lines ...
• [SLOW TEST:21.077 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":7,"skipped":60,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:25.318: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Oct  6 23:54:19.135: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9" in namespace "security-context-test-2148" to be "Succeeded or Failed"
Oct  6 23:54:19.177: INFO: Pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 41.980596ms
Oct  6 23:54:21.202: INFO: Pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067180302s
Oct  6 23:54:23.228: INFO: Pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092693392s
Oct  6 23:54:25.254: INFO: Pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119462275s
Oct  6 23:54:27.282: INFO: Pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146683647s
Oct  6 23:54:27.282: INFO: Pod "alpine-nnp-true-ff61df96-11f6-4b8b-ac0c-096a3972b2f9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:27.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2148" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:54:20.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b" in namespace "downward-api-4053" to be "Succeeded or Failed"
Oct  6 23:54:20.269: INFO: Pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.765382ms
Oct  6 23:54:22.294: INFO: Pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050080037s
Oct  6 23:54:24.323: INFO: Pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078354693s
Oct  6 23:54:26.348: INFO: Pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103467897s
Oct  6 23:54:28.372: INFO: Pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128070496s
STEP: Saw pod success
Oct  6 23:54:28.372: INFO: Pod "downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b" satisfied condition "Succeeded or Failed"
Oct  6 23:54:28.396: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b container client-container: <nil>
STEP: delete the pod
Oct  6 23:54:28.464: INFO: Waiting for pod downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b to disappear
Oct  6 23:54:28.497: INFO: Pod downwardapi-volume-f4e5e98b-d873-4db1-8d5a-4b4f63ff8f4b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.468 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:108.452 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:216
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:33.000: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
• [SLOW TEST:10.820 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:33.790: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Oct  6 23:54:26.610: INFO: PersistentVolumeClaim pvc-clnxf found but phase is Pending instead of Bound.
Oct  6 23:54:28.634: INFO: PersistentVolumeClaim pvc-clnxf found and phase=Bound (12.174339089s)
Oct  6 23:54:28.634: INFO: Waiting up to 3m0s for PersistentVolume local-5t5px to have phase Bound
Oct  6 23:54:28.657: INFO: PersistentVolume local-5t5px found and phase=Bound (23.096825ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-xsl7
STEP: Creating a pod to test exec-volume-test
Oct  6 23:54:28.731: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-xsl7" in namespace "volume-885" to be "Succeeded or Failed"
Oct  6 23:54:28.755: INFO: Pod "exec-volume-test-preprovisionedpv-xsl7": Phase="Pending", Reason="", readiness=false. Elapsed: 23.949787ms
Oct  6 23:54:30.781: INFO: Pod "exec-volume-test-preprovisionedpv-xsl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050687796s
Oct  6 23:54:32.806: INFO: Pod "exec-volume-test-preprovisionedpv-xsl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075185451s
STEP: Saw pod success
Oct  6 23:54:32.806: INFO: Pod "exec-volume-test-preprovisionedpv-xsl7" satisfied condition "Succeeded or Failed"
Oct  6 23:54:32.830: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod exec-volume-test-preprovisionedpv-xsl7 container exec-container-preprovisionedpv-xsl7: <nil>
STEP: delete the pod
Oct  6 23:54:32.891: INFO: Waiting for pod exec-volume-test-preprovisionedpv-xsl7 to disappear
Oct  6 23:54:32.917: INFO: Pod exec-volume-test-preprovisionedpv-xsl7 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-xsl7
Oct  6 23:54:32.918: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xsl7" in namespace "volume-885"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:33.906: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 242 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":68,"failed":0}
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:35.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:35.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4095" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":7,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:35.441: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":5,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:53:55.675: INFO: >>> kubeConfig: /root/.kube/config
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:37.374: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-2f6d3d1f-bbe2-4a61-8089-9960f43fd192
STEP: Creating a pod to test consume secrets
Oct  6 23:54:33.324: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541" in namespace "projected-4620" to be "Succeeded or Failed"
Oct  6 23:54:33.351: INFO: Pod "pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541": Phase="Pending", Reason="", readiness=false. Elapsed: 26.481902ms
Oct  6 23:54:35.377: INFO: Pod "pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052313418s
Oct  6 23:54:37.403: INFO: Pod "pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078214077s
STEP: Saw pod success
Oct  6 23:54:37.403: INFO: Pod "pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541" satisfied condition "Succeeded or Failed"
Oct  6 23:54:37.427: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  6 23:54:37.491: INFO: Waiting for pod pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541 to disappear
Oct  6 23:54:37.515: INFO: Pod pod-projected-secrets-43fc4e0e-3899-4c39-8f83-8686bf8b3541 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:37.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4620" for this suite.
STEP: Destroying namespace "secret-namespace-8458" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":6,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:77.757 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:33.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct  6 23:54:33.953: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-1361" to be "Succeeded or Failed"
Oct  6 23:54:33.975: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 22.232904ms
Oct  6 23:54:35.999: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046321683s
Oct  6 23:54:38.024: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071420619s
Oct  6 23:54:38.024: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:38.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1361" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:38.148: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
Oct  6 23:54:26.034: INFO: PersistentVolumeClaim pvc-5m6gj found but phase is Pending instead of Bound.
Oct  6 23:54:28.059: INFO: PersistentVolumeClaim pvc-5m6gj found and phase=Bound (12.174496948s)
Oct  6 23:54:28.059: INFO: Waiting up to 3m0s for PersistentVolume local-rpvk6 to have phase Bound
Oct  6 23:54:28.082: INFO: PersistentVolume local-rpvk6 found and phase=Bound (22.936556ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-d4x7
STEP: Creating a pod to test subpath
Oct  6 23:54:28.157: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-d4x7" in namespace "provisioning-1163" to be "Succeeded or Failed"
Oct  6 23:54:28.187: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.137995ms
Oct  6 23:54:30.215: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057537782s
Oct  6 23:54:32.243: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085878969s
Oct  6 23:54:34.267: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110020419s
Oct  6 23:54:36.308: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150547165s
Oct  6 23:54:38.336: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.179205661s
STEP: Saw pod success
Oct  6 23:54:38.336: INFO: Pod "pod-subpath-test-preprovisionedpv-d4x7" satisfied condition "Succeeded or Failed"
Oct  6 23:54:38.363: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod pod-subpath-test-preprovisionedpv-d4x7 container test-container-volume-preprovisionedpv-d4x7: <nil>
STEP: delete the pod
Oct  6 23:54:38.439: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-d4x7 to disappear
Oct  6 23:54:38.464: INFO: Pod pod-subpath-test-preprovisionedpv-d4x7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-d4x7
Oct  6 23:54:38.464: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-d4x7" in namespace "provisioning-1163"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:38.945: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 111 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Oct  6 23:54:34.234: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-4289dd0b-e055-47a7-8061-fe4c835d999a" in namespace "security-context-test-3389" to be "Succeeded or Failed"
Oct  6 23:54:34.261: INFO: Pod "alpine-nnp-nil-4289dd0b-e055-47a7-8061-fe4c835d999a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.258159ms
Oct  6 23:54:36.308: INFO: Pod "alpine-nnp-nil-4289dd0b-e055-47a7-8061-fe4c835d999a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073237911s
Oct  6 23:54:38.335: INFO: Pod "alpine-nnp-nil-4289dd0b-e055-47a7-8061-fe4c835d999a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100176234s
Oct  6 23:54:40.360: INFO: Pod "alpine-nnp-nil-4289dd0b-e055-47a7-8061-fe4c835d999a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125140179s
Oct  6 23:54:40.360: INFO: Pod "alpine-nnp-nil-4289dd0b-e055-47a7-8061-fe4c835d999a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:40.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3389" for this suite.


... skipping 13 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:54:38.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4" in namespace "projected-9488" to be "Succeeded or Failed"
Oct  6 23:54:38.414: INFO: Pod "downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.256323ms
Oct  6 23:54:40.451: INFO: Pod "downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06536178s
STEP: Saw pod success
Oct  6 23:54:40.451: INFO: Pod "downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4" satisfied condition "Succeeded or Failed"
Oct  6 23:54:40.478: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4 container client-container: <nil>
STEP: delete the pod
Oct  6 23:54:40.548: INFO: Waiting for pod downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4 to disappear
Oct  6 23:54:40.571: INFO: Pod downwardapi-volume-b8d149d0-93c2-46a8-9649-e5b0f11115b4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:40.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9488" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:40.642: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:42.653: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:5.175 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:54:39.224: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701" in namespace "downward-api-8452" to be "Succeeded or Failed"
Oct  6 23:54:39.264: INFO: Pod "downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701": Phase="Pending", Reason="", readiness=false. Elapsed: 39.096859ms
Oct  6 23:54:41.287: INFO: Pod "downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062824952s
Oct  6 23:54:43.363: INFO: Pod "downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138552219s
STEP: Saw pod success
Oct  6 23:54:43.363: INFO: Pod "downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701" satisfied condition "Succeeded or Failed"
Oct  6 23:54:43.413: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701 container client-container: <nil>
STEP: delete the pod
Oct  6 23:54:43.497: INFO: Waiting for pod downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701 to disappear
Oct  6 23:54:43.523: INFO: Pod downwardapi-volume-1080ba22-cf44-4991-813d-34f643aef701 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:43.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8452" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:43.626: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:43.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-274" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":9,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:44.197: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
Oct  6 23:53:55.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct  6 23:53:55.366: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:53:55.420: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-476" in namespace "provisioning-476" to be "Succeeded or Failed"
Oct  6 23:53:55.452: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 32.586485ms
Oct  6 23:53:57.478: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05869263s
Oct  6 23:53:59.503: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083442353s
Oct  6 23:54:01.530: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110518948s
Oct  6 23:54:03.558: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138744073s
STEP: Saw pod success
Oct  6 23:54:03.558: INFO: Pod "hostpath-symlink-prep-provisioning-476" satisfied condition "Succeeded or Failed"
Oct  6 23:54:03.558: INFO: Deleting pod "hostpath-symlink-prep-provisioning-476" in namespace "provisioning-476"
Oct  6 23:54:03.598: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-476" to be fully deleted
Oct  6 23:54:03.630: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xjlf
STEP: Creating a pod to test atomic-volume-subpath
Oct  6 23:54:03.678: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xjlf" in namespace "provisioning-476" to be "Succeeded or Failed"
Oct  6 23:54:03.706: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 28.465084ms
Oct  6 23:54:05.732: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054723443s
Oct  6 23:54:07.767: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089382181s
Oct  6 23:54:09.794: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115933672s
Oct  6 23:54:11.819: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141383625s
Oct  6 23:54:13.845: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Running", Reason="", readiness=true. Elapsed: 10.167054911s
... skipping 5 lines ...
Oct  6 23:54:26.011: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Running", Reason="", readiness=true. Elapsed: 22.333288595s
Oct  6 23:54:28.042: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Running", Reason="", readiness=true. Elapsed: 24.363931105s
Oct  6 23:54:30.066: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Running", Reason="", readiness=true. Elapsed: 26.387979394s
Oct  6 23:54:32.091: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Running", Reason="", readiness=true. Elapsed: 28.413404939s
Oct  6 23:54:34.117: INFO: Pod "pod-subpath-test-inlinevolume-xjlf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.438806318s
STEP: Saw pod success
Oct  6 23:54:34.117: INFO: Pod "pod-subpath-test-inlinevolume-xjlf" satisfied condition "Succeeded or Failed"
Oct  6 23:54:34.142: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod pod-subpath-test-inlinevolume-xjlf container test-container-subpath-inlinevolume-xjlf: <nil>
STEP: delete the pod
Oct  6 23:54:34.203: INFO: Waiting for pod pod-subpath-test-inlinevolume-xjlf to disappear
Oct  6 23:54:34.226: INFO: Pod pod-subpath-test-inlinevolume-xjlf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xjlf
Oct  6 23:54:34.226: INFO: Deleting pod "pod-subpath-test-inlinevolume-xjlf" in namespace "provisioning-476"
STEP: Deleting pod
Oct  6 23:54:34.250: INFO: Deleting pod "pod-subpath-test-inlinevolume-xjlf" in namespace "provisioning-476"
Oct  6 23:54:34.303: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-476" in namespace "provisioning-476" to be "Succeeded or Failed"
Oct  6 23:54:34.327: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 23.917947ms
Oct  6 23:54:36.360: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056159729s
Oct  6 23:54:38.388: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084110494s
Oct  6 23:54:40.414: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110697337s
Oct  6 23:54:42.481: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177353332s
Oct  6 23:54:44.556: INFO: Pod "hostpath-symlink-prep-provisioning-476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.252666068s
STEP: Saw pod success
Oct  6 23:54:44.556: INFO: Pod "hostpath-symlink-prep-provisioning-476" satisfied condition "Succeeded or Failed"
Oct  6 23:54:44.556: INFO: Deleting pod "hostpath-symlink-prep-provisioning-476" in namespace "provisioning-476"
Oct  6 23:54:44.672: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-476" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:44.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-476" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":14,"skipped":130,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:44.948: INFO: Only supported for providers [vsphere] (not gce)
... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:46.322: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 46 lines ...
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 23:54:16.073: INFO: File wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:16.103: INFO: File jessie_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains '' instead of 'bar.example.com.'
Oct  6 23:54:16.103: INFO: Lookups using dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f failed for: [wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local jessie_udp@dns-test-service-3.dns-991.svc.cluster.local]

Oct  6 23:54:21.130: INFO: File wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:21.155: INFO: File jessie_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:21.155: INFO: Lookups using dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f failed for: [wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local jessie_udp@dns-test-service-3.dns-991.svc.cluster.local]

Oct  6 23:54:26.130: INFO: File wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:26.156: INFO: File jessie_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:26.156: INFO: Lookups using dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f failed for: [wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local jessie_udp@dns-test-service-3.dns-991.svc.cluster.local]

Oct  6 23:54:31.130: INFO: File wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:31.154: INFO: File jessie_udp@dns-test-service-3.dns-991.svc.cluster.local from pod  dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 23:54:31.154: INFO: Lookups using dns-991/dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f failed for: [wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local jessie_udp@dns-test-service-3.dns-991.svc.cluster.local]

Oct  6 23:54:36.154: INFO: DNS probes using dns-test-3b9355be-9b67-4bdc-975b-b1233784bd5f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-991.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:51.125 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":5,"skipped":60,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:50.693: INFO: Only supported for providers [aws] (not gce)
... skipping 150 lines ...
• [SLOW TEST:25.475 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:50.853: INFO: Only supported for providers [azure] (not gce)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Oct  6 23:54:43.037: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5808" to be "Succeeded or Failed"
Oct  6 23:54:43.093: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 55.514316ms
Oct  6 23:54:45.190: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152873684s
Oct  6 23:54:47.237: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199775786s
Oct  6 23:54:49.261: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224213978s
Oct  6 23:54:51.289: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.251223615s
STEP: Saw pod success
Oct  6 23:54:51.289: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  6 23:54:51.326: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  6 23:54:51.408: INFO: Waiting for pod pod-host-path-test to disappear
Oct  6 23:54:51.433: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.676 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":8,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:51.506: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":15,"skipped":141,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 68 lines ...
Oct  6 23:54:48.029: INFO: Waiting for pod gcepd-client to disappear
Oct  6 23:54:48.099: INFO: Pod gcepd-client no longer exists
STEP: cleaning the environment after gcepd
STEP: Deleting pv and pvc
Oct  6 23:54:48.099: INFO: Deleting PersistentVolumeClaim "pvc-ms8pn"
Oct  6 23:54:48.133: INFO: Deleting PersistentVolume "gcepd-6txjm"
Oct  6 23:54:48.750: INFO: error deleting PD "e2e-b5f270e0-b5de-4384-b1ae-de76e016966b": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-b5f270e0-b5de-4384-b1ae-de76e016966b' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:54:48.750: INFO: Couldn't delete PD "e2e-b5f270e0-b5de-4384-b1ae-de76e016966b", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-b5f270e0-b5de-4384-b1ae-de76e016966b' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:54:55.543: INFO: Successfully deleted PD "e2e-b5f270e0-b5de-4384-b1ae-de76e016966b".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:55.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8001" for this suite.

... skipping 36 lines ...
Oct  6 23:54:41.491: INFO: PersistentVolumeClaim pvc-s69v9 found but phase is Pending instead of Bound.
Oct  6 23:54:43.521: INFO: PersistentVolumeClaim pvc-s69v9 found and phase=Bound (10.180338191s)
Oct  6 23:54:43.522: INFO: Waiting up to 3m0s for PersistentVolume local-wjxxn to have phase Bound
Oct  6 23:54:43.555: INFO: PersistentVolume local-wjxxn found and phase=Bound (33.922422ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nt7b
STEP: Creating a pod to test subpath
Oct  6 23:54:43.643: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nt7b" in namespace "provisioning-8804" to be "Succeeded or Failed"
Oct  6 23:54:43.684: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.781253ms
Oct  6 23:54:45.756: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112929869s
Oct  6 23:54:47.811: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168028583s
Oct  6 23:54:49.838: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194489771s
Oct  6 23:54:51.866: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222354706s
Oct  6 23:54:53.892: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.248515994s
Oct  6 23:54:55.969: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.32577683s
STEP: Saw pod success
Oct  6 23:54:55.969: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7b" satisfied condition "Succeeded or Failed"
Oct  6 23:54:56.016: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-preprovisionedpv-nt7b container test-container-volume-preprovisionedpv-nt7b: <nil>
STEP: delete the pod
Oct  6 23:54:56.164: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nt7b to disappear
Oct  6 23:54:56.196: INFO: Pod pod-subpath-test-preprovisionedpv-nt7b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nt7b
Oct  6 23:54:56.196: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nt7b" in namespace "provisioning-8804"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":44,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:57.347: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 95 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 137 lines ...
• [SLOW TEST:15.406 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:54:58.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-6089" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":9,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:58.477: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
• [SLOW TEST:15.556 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":10,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:54:59.854: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":59,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:06.935: INFO: Only supported for providers [azure] (not gce)
... skipping 14 lines ...
      Only supported for providers [azure] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":64,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:40.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 52 lines ...
• [SLOW TEST:26.969 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:07.463: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Oct  6 23:54:55.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct  6 23:55:08.383: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":142,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:08.506: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Oct  6 23:54:27.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  6 23:54:27.601: INFO: created pod
Oct  6 23:54:27.601: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-290" to be "Succeeded or Failed"
Oct  6 23:54:27.626: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 25.394788ms
Oct  6 23:54:29.652: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051362416s
Oct  6 23:54:31.677: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076696896s
Oct  6 23:54:33.704: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103495331s
Oct  6 23:54:35.729: INFO: Pod "oidc-discovery-validator": Phase="Failed", Reason="", readiness=false. Elapsed: 8.128834918s
Oct  6 23:55:05.730: INFO: polling logs
Oct  6 23:55:05.761: INFO: Pod logs: 
2021/10/06 23:54:31 OK: Got token
2021/10/06 23:54:31 validating with in-cluster discovery
2021/10/06 23:54:31 OK: got issuer https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local
2021/10/06 23:54:31 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local", Subject:"system:serviceaccount:svcaccounts-290:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1633565068, NotBefore:1633564468, IssuedAt:1633564468, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-290", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"0e9d91af-bab9-4baf-8fdb-3983cc036416"}}}
2021/10/06 23:54:31 failed to validate with in-cluster discovery: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/.well-known/openid-configuration": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 100.64.0.10:53: no such host
2021/10/06 23:54:31 falling back to validating with external discovery
2021/10/06 23:54:31 OK: got issuer https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local
2021/10/06 23:54:31 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local", Subject:"system:serviceaccount:svcaccounts-290:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1633565068, NotBefore:1633564468, IssuedAt:1633564468, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-290", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"0e9d91af-bab9-4baf-8fdb-3983cc036416"}}}
2021/10/06 23:54:31 Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/.well-known/openid-configuration": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 100.64.0.10:53: no such host

Oct  6 23:55:05.762: FAIL: Unexpected error:
    <*errors.errorString | 0xc002e726e0>: {
        s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.180.0.3 PodIP:100.96.2.83 PodIPs:[{IP:100.96.2.83}] StartTime:2021-10-06 23:54:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-06 23:54:31 +0000 UTC,FinishedAt:2021-10-06 23:54:31 +0000 UTC,ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98 Started:0xc001280e6f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.180.0.3 PodIP:100.96.2.83 PodIPs:[{IP:100.96.2.83}] StartTime:2021-10-06 23:54:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-06 23:54:31 +0000 UTC,FinishedAt:2021-10-06 23:54:31 +0000 UTC,ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98 Started:0xc001280e6f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func6.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789 +0xc45
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0009aa180)
... skipping 9 lines ...
STEP: Collecting events from namespace "svcaccounts-290".
STEP: Found 5 events.
Oct  6 23:55:05.817: INFO: At 2021-10-06 23:54:27 +0000 UTC - event for oidc-discovery-validator: {default-scheduler } Scheduled: Successfully assigned svcaccounts-290/oidc-discovery-validator to nodes-us-west3-a-87xh
Oct  6 23:55:05.817: INFO: At 2021-10-06 23:54:29 +0000 UTC - event for oidc-discovery-validator: {kubelet nodes-us-west3-a-87xh} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  6 23:55:05.817: INFO: At 2021-10-06 23:54:29 +0000 UTC - event for oidc-discovery-validator: {kubelet nodes-us-west3-a-87xh} Created: Created container oidc-discovery-validator
Oct  6 23:55:05.817: INFO: At 2021-10-06 23:54:31 +0000 UTC - event for oidc-discovery-validator: {kubelet nodes-us-west3-a-87xh} Started: Started container oidc-discovery-validator
Oct  6 23:55:05.817: INFO: At 2021-10-06 23:54:35 +0000 UTC - event for oidc-discovery-validator: {kubelet nodes-us-west3-a-87xh} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-fts7l" : object "svcaccounts-290"/"kube-root-ca.crt" not registered
Oct  6 23:55:05.844: INFO: POD                       NODE                   PHASE   GRACE  CONDITIONS
Oct  6 23:55:05.844: INFO: oidc-discovery-validator  nodes-us-west3-a-87xh  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-06 23:54:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-06 23:54:27 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-06 23:54:27 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-06 23:54:27 +0000 UTC  }]
Oct  6 23:55:05.844: INFO: 
Oct  6 23:55:05.878: INFO: 
Logging node info for node master-us-west3-a-8lvv
Oct  6 23:55:05.908: INFO: Node Info: &Node{ObjectMeta:{master-us-west3-a-8lvv    12837174-20ce-4437-9685-80925b4dfbe0 1906 0 2021-10-06 23:45:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:e2-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west3 failure-domain.beta.kubernetes.io/zone:us-west3-a kops.k8s.io/instancegroup:master-us-west3-a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:master-us-west3-a-8lvv kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:e2-standard-2 topology.kubernetes.io/region:us-west3 topology.kubernetes.io/zone:us-west3-a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-10-06 23:45:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2021-10-06 23:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2021-10-06 23:46:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2021-10-06 23:46:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kops-controller Update v1 2021-10-06 23:46:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cloud.google.com/metadata-proxy-ready":{},"f:kops.k8s.io/instancegroup":{}}}} } {kubelet Update v1 2021-10-06 23:46:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west3-a/master-us-west3-a-8lvv,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49767120896 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8340893696 0} {<nil>} 8145404Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44790408733 0} {<nil>} 44790408733 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8236036096 0} {<nil>} 8043004Ki BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-06 23:46:14 +0000 UTC,LastTransitionTime:2021-10-06 23:46:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-06 23:51:25 +0000 UTC,LastTransitionTime:2021-10-06 23:45:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-06 23:51:25 +0000 UTC,LastTransitionTime:2021-10-06 23:45:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-06 23:51:25 +0000 UTC,LastTransitionTime:2021-10-06 23:45:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-06 23:51:25 +0000 UTC,LastTransitionTime:2021-10-06 23:46:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.180.0.2,},NodeAddress{Type:ExternalIP,Address:34.106.238.209,},NodeAddress{Type:InternalDNS,Address:master-us-west3-a-8lvv.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:master-us-west3-a-8lvv,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:761cf7e08ed0d6bd00d8e5d520004938,SystemUUID:761cf7e0-8ed0-d6bd-00d8-e5d520004938,BootID:3ffc4e96-2aee-40b4-9292-65dc7fc5b37e,KernelVersion:5.11.0-1018-gcp,OSImage:Ubuntu 20.04.3 LTS,ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.22.2,KubeProxyVersion:v1.22.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:507676854,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.22.2],SizeBytes:128450973,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.22.2],SizeBytes:121979580,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.23.0-alpha.1],SizeBytes:113187431,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.23.0-alpha.1],SizeBytes:113041505,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.2],SizeBytes:103645372,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.22.2],SizeBytes:52658863,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.23.0-alpha.1],SizeBytes:26201864,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/google_containers/k8s-custom-iptables@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5 gcr.io/google_containers/k8s-custom-iptables:1.0],SizeBytes:6528911,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct  6 23:55:05.908: INFO: 
Logging kubelet events for node master-us-west3-a-8lvv
... skipping 179 lines ...
• Failure [42.591 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  6 23:55:05.762: Unexpected error:
      <*errors.errorString | 0xc002e726e0>: {
          s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.180.0.3 PodIP:100.96.2.83 PodIPs:[{IP:100.96.2.83}] StartTime:2021-10-06 23:54:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-06 23:54:31 +0000 UTC,FinishedAt:2021-10-06 23:54:31 +0000 UTC,ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98 Started:0xc001280e6f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
      }
      pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-06 23:54:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.180.0.3 PodIP:100.96.2.83 PodIPs:[{IP:100.96.2.83}] StartTime:2021-10-06 23:54:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-06 23:54:31 +0000 UTC,FinishedAt:2021-10-06 23:54:31 +0000 UTC,ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://ad5ead795310d5a27bb5898783ee69de7e6df7780d2ad4d44d7e74fc142c7d98 Started:0xc001280e6f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789
------------------------------
{"msg":"FAILED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":8,"skipped":35,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:10.009: INFO: Only supported for providers [vsphere] (not gce)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":6,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:54:55.607: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct  6 23:54:55.961: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  6 23:54:55.961: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pqfc
STEP: Creating a pod to test subpath
Oct  6 23:54:56.027: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pqfc" in namespace "provisioning-9316" to be "Succeeded or Failed"
Oct  6 23:54:56.080: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.805755ms
Oct  6 23:54:58.107: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0794253s
Oct  6 23:55:00.138: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110523675s
Oct  6 23:55:02.164: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136316182s
Oct  6 23:55:04.190: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162926966s
Oct  6 23:55:06.218: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.190742783s
Oct  6 23:55:08.245: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.217476009s
Oct  6 23:55:10.277: INFO: Pod "pod-subpath-test-inlinevolume-pqfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.249368054s
STEP: Saw pod success
Oct  6 23:55:10.277: INFO: Pod "pod-subpath-test-inlinevolume-pqfc" satisfied condition "Succeeded or Failed"
Oct  6 23:55:10.314: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-inlinevolume-pqfc container test-container-subpath-inlinevolume-pqfc: <nil>
STEP: delete the pod
Oct  6 23:55:10.384: INFO: Waiting for pod pod-subpath-test-inlinevolume-pqfc to disappear
Oct  6 23:55:10.408: INFO: Pod pod-subpath-test-inlinevolume-pqfc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pqfc
Oct  6 23:55:10.408: INFO: Deleting pod "pod-subpath-test-inlinevolume-pqfc" in namespace "provisioning-9316"
... skipping 51 lines ...
Oct  6 23:54:37.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct  6 23:54:38.109: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:54:38.168: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7112" in namespace "provisioning-7112" to be "Succeeded or Failed"
Oct  6 23:54:38.192: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 23.763904ms
Oct  6 23:54:40.217: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048508903s
Oct  6 23:54:42.306: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137854673s
Oct  6 23:54:44.425: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25678231s
Oct  6 23:54:46.467: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.298796395s
STEP: Saw pod success
Oct  6 23:54:46.467: INFO: Pod "hostpath-symlink-prep-provisioning-7112" satisfied condition "Succeeded or Failed"
Oct  6 23:54:46.467: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7112" in namespace "provisioning-7112"
Oct  6 23:54:46.519: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7112" to be fully deleted
Oct  6 23:54:46.544: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8qkm
STEP: Creating a pod to test subpath
Oct  6 23:54:46.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8qkm" in namespace "provisioning-7112" to be "Succeeded or Failed"
Oct  6 23:54:46.608: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Pending", Reason="", readiness=false. Elapsed: 28.278346ms
Oct  6 23:54:48.639: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05935116s
Oct  6 23:54:50.665: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085854059s
Oct  6 23:54:52.692: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112542979s
Oct  6 23:54:54.716: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136443391s
Oct  6 23:54:56.749: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169945866s
Oct  6 23:54:58.776: INFO: Pod "pod-subpath-test-inlinevolume-8qkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.196568554s
STEP: Saw pod success
Oct  6 23:54:58.776: INFO: Pod "pod-subpath-test-inlinevolume-8qkm" satisfied condition "Succeeded or Failed"
Oct  6 23:54:58.801: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod pod-subpath-test-inlinevolume-8qkm container test-container-subpath-inlinevolume-8qkm: <nil>
STEP: delete the pod
Oct  6 23:54:58.863: INFO: Waiting for pod pod-subpath-test-inlinevolume-8qkm to disappear
Oct  6 23:54:58.886: INFO: Pod pod-subpath-test-inlinevolume-8qkm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8qkm
Oct  6 23:54:58.886: INFO: Deleting pod "pod-subpath-test-inlinevolume-8qkm" in namespace "provisioning-7112"
STEP: Deleting pod
Oct  6 23:54:58.910: INFO: Deleting pod "pod-subpath-test-inlinevolume-8qkm" in namespace "provisioning-7112"
Oct  6 23:54:58.963: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7112" in namespace "provisioning-7112" to be "Succeeded or Failed"
Oct  6 23:54:58.987: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 23.937216ms
Oct  6 23:55:01.012: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048157207s
Oct  6 23:55:03.036: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072648587s
Oct  6 23:55:05.079: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115272345s
Oct  6 23:55:07.109: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145346502s
Oct  6 23:55:09.134: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170503452s
Oct  6 23:55:11.159: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Pending", Reason="", readiness=false. Elapsed: 12.195225307s
Oct  6 23:55:13.185: INFO: Pod "hostpath-symlink-prep-provisioning-7112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.221985566s
STEP: Saw pod success
Oct  6 23:55:13.185: INFO: Pod "hostpath-symlink-prep-provisioning-7112" satisfied condition "Succeeded or Failed"
Oct  6 23:55:13.185: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7112" in namespace "provisioning-7112"
Oct  6 23:55:13.220: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7112" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:13.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7112" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":68,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:13.327: INFO: Only supported for providers [openstack] (not gce)
... skipping 212 lines ...
STEP: Creating a kubernetes client
Oct  6 23:54:21.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct  6 23:54:21.733: INFO: PodSpec: initContainers in spec.initContainers
Oct  6 23:55:18.065: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fd516aed-0ebe-4cfb-8b33-d0b5b85470e8", GenerateName:"", Namespace:"init-container-5284", SelfLink:"", UID:"29b18483-7983-47ce-a19b-129ad276cdc3", ResourceVersion:"11565", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769161261, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"733727145"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003fa9ae8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003fa9b00), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003fa9b18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003fa9b30), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-6sk5w", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002b6d840), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-6sk5w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-6sk5w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-6sk5w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003ca4b88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"nodes-us-west3-a-v32d", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aa93b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003ca4c00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003ca4c20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003ca4c28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003ca4c2c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003a9f460), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161261, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161261, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161261, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769161261, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.180.0.4", PodIP:"100.96.1.59", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.59"}}, StartTime:(*v1.Time)(0xc003fa9b60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa9570)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa95e0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://8be114dccde7dc5db838ee7e1921ec565ccf36c5556527224a82062ad31f0fe3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b6d900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b6d8e0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc003ca4caf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:18.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5284" for this suite.


• [SLOW TEST:56.535 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:18.191: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
STEP: Destroying namespace "services-1016" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:18.923: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 61 lines ...
Oct  6 23:55:10.562: INFO: PersistentVolumeClaim pvc-xgw4l found but phase is Pending instead of Bound.
Oct  6 23:55:12.602: INFO: PersistentVolumeClaim pvc-xgw4l found and phase=Bound (2.063667063s)
Oct  6 23:55:12.602: INFO: Waiting up to 3m0s for PersistentVolume local-4br8h to have phase Bound
Oct  6 23:55:12.627: INFO: PersistentVolume local-4br8h found and phase=Bound (24.71874ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jqdm
STEP: Creating a pod to test subpath
Oct  6 23:55:12.745: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jqdm" in namespace "provisioning-6033" to be "Succeeded or Failed"
Oct  6 23:55:12.783: INFO: Pod "pod-subpath-test-preprovisionedpv-jqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 38.713314ms
Oct  6 23:55:14.823: INFO: Pod "pod-subpath-test-preprovisionedpv-jqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078538941s
Oct  6 23:55:16.851: INFO: Pod "pod-subpath-test-preprovisionedpv-jqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106330106s
Oct  6 23:55:18.898: INFO: Pod "pod-subpath-test-preprovisionedpv-jqdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153526047s
STEP: Saw pod success
Oct  6 23:55:18.898: INFO: Pod "pod-subpath-test-preprovisionedpv-jqdm" satisfied condition "Succeeded or Failed"
Oct  6 23:55:18.927: INFO: Trying to get logs from node nodes-us-west3-a-vcbk pod pod-subpath-test-preprovisionedpv-jqdm container test-container-subpath-preprovisionedpv-jqdm: <nil>
STEP: delete the pod
Oct  6 23:55:19.083: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jqdm to disappear
Oct  6 23:55:19.111: INFO: Pod pod-subpath-test-preprovisionedpv-jqdm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jqdm
Oct  6 23:55:19.111: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jqdm" in namespace "provisioning-6033"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":93,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:19.711: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:20.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8738" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":12,"skipped":99,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:10.821 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":38,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":57,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:55:10.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  6 23:55:10.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684" in namespace "downward-api-1095" to be "Succeeded or Failed"
Oct  6 23:55:10.722: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684": Phase="Pending", Reason="", readiness=false. Elapsed: 24.867582ms
Oct  6 23:55:12.774: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076311435s
Oct  6 23:55:14.824: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126913005s
Oct  6 23:55:16.850: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15289764s
Oct  6 23:55:18.899: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201956564s
Oct  6 23:55:21.032: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.334896732s
STEP: Saw pod success
Oct  6 23:55:21.032: INFO: Pod "downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684" satisfied condition "Succeeded or Failed"
Oct  6 23:55:21.130: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684 container client-container: <nil>
STEP: delete the pod
Oct  6 23:55:21.250: INFO: Waiting for pod downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684 to disappear
Oct  6 23:55:21.276: INFO: Pod downwardapi-volume-9caf5db0-afa9-42fa-ab10-43f63a278684 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 13 lines ...
Oct  6 23:54:46.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Oct  6 23:54:46.569: INFO: Waiting up to 5m0s for pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" in namespace "svcaccounts-180" to be "Succeeded or Failed"
Oct  6 23:54:46.602: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.455494ms
Oct  6 23:54:48.626: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056566317s
Oct  6 23:54:50.659: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089905357s
Oct  6 23:54:52.684: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114836074s
Oct  6 23:54:54.708: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138471912s
STEP: Saw pod success
Oct  6 23:54:54.708: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" satisfied condition "Succeeded or Failed"
Oct  6 23:54:54.730: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:54:54.798: INFO: Waiting for pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f to disappear
Oct  6 23:54:54.825: INFO: Pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f no longer exists
STEP: Creating a pod to test service account token: 
Oct  6 23:54:54.849: INFO: Waiting up to 5m0s for pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" in namespace "svcaccounts-180" to be "Succeeded or Failed"
Oct  6 23:54:54.872: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.624592ms
Oct  6 23:54:56.896: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046445687s
Oct  6 23:54:58.919: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069748102s
Oct  6 23:55:00.954: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105219398s
Oct  6 23:55:02.981: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132133773s
Oct  6 23:55:05.016: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166377882s
Oct  6 23:55:07.040: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.191145117s
Oct  6 23:55:09.066: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.216410806s
STEP: Saw pod success
Oct  6 23:55:09.066: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" satisfied condition "Succeeded or Failed"
Oct  6 23:55:09.089: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:55:09.152: INFO: Waiting for pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f to disappear
Oct  6 23:55:09.183: INFO: Pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f no longer exists
STEP: Creating a pod to test service account token: 
Oct  6 23:55:09.209: INFO: Waiting up to 5m0s for pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" in namespace "svcaccounts-180" to be "Succeeded or Failed"
Oct  6 23:55:09.232: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.913221ms
Oct  6 23:55:11.256: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04696396s
Oct  6 23:55:13.284: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074759418s
Oct  6 23:55:15.439: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.22992192s
STEP: Saw pod success
Oct  6 23:55:15.439: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" satisfied condition "Succeeded or Failed"
Oct  6 23:55:15.487: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:55:15.613: INFO: Waiting for pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f to disappear
Oct  6 23:55:15.655: INFO: Pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f no longer exists
STEP: Creating a pod to test service account token: 
Oct  6 23:55:15.695: INFO: Waiting up to 5m0s for pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" in namespace "svcaccounts-180" to be "Succeeded or Failed"
Oct  6 23:55:15.738: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.936321ms
Oct  6 23:55:17.761: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066390412s
Oct  6 23:55:19.790: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095192192s
Oct  6 23:55:21.832: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137074682s
STEP: Saw pod success
Oct  6 23:55:21.832: INFO: Pod "test-pod-68d363e8-dca6-4189-8dea-457195e1390f" satisfied condition "Succeeded or Failed"
Oct  6 23:55:21.863: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f container agnhost-container: <nil>
STEP: delete the pod
Oct  6 23:55:21.934: INFO: Waiting for pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f to disappear
Oct  6 23:55:21.957: INFO: Pod test-pod-68d363e8-dca6-4189-8dea-457195e1390f no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 65 lines ...
Oct  6 23:55:11.683: INFO: PersistentVolumeClaim pvc-zj5xm found but phase is Pending instead of Bound.
Oct  6 23:55:13.712: INFO: PersistentVolumeClaim pvc-zj5xm found and phase=Bound (14.212165585s)
Oct  6 23:55:13.712: INFO: Waiting up to 3m0s for PersistentVolume local-cqgv2 to have phase Bound
Oct  6 23:55:13.743: INFO: PersistentVolume local-cqgv2 found and phase=Bound (30.494221ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-l68w
STEP: Creating a pod to test subpath
Oct  6 23:55:13.857: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l68w" in namespace "provisioning-3861" to be "Succeeded or Failed"
Oct  6 23:55:13.913: INFO: Pod "pod-subpath-test-preprovisionedpv-l68w": Phase="Pending", Reason="", readiness=false. Elapsed: 55.907048ms
Oct  6 23:55:15.943: INFO: Pod "pod-subpath-test-preprovisionedpv-l68w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086023382s
Oct  6 23:55:17.970: INFO: Pod "pod-subpath-test-preprovisionedpv-l68w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112190091s
Oct  6 23:55:19.999: INFO: Pod "pod-subpath-test-preprovisionedpv-l68w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141530189s
Oct  6 23:55:22.032: INFO: Pod "pod-subpath-test-preprovisionedpv-l68w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.175005583s
STEP: Saw pod success
Oct  6 23:55:22.033: INFO: Pod "pod-subpath-test-preprovisionedpv-l68w" satisfied condition "Succeeded or Failed"
Oct  6 23:55:22.063: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-preprovisionedpv-l68w container test-container-subpath-preprovisionedpv-l68w: <nil>
STEP: delete the pod
Oct  6 23:55:22.162: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l68w to disappear
Oct  6 23:55:22.189: INFO: Pod pod-subpath-test-preprovisionedpv-l68w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l68w
Oct  6 23:55:22.189: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l68w" in namespace "provisioning-3861"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:22.886: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:55:21.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct  6 23:55:10.605: INFO: PersistentVolumeClaim pvc-6b7kc found but phase is Pending instead of Bound.
Oct  6 23:55:12.643: INFO: PersistentVolumeClaim pvc-6b7kc found and phase=Bound (14.251880834s)
Oct  6 23:55:12.643: INFO: Waiting up to 3m0s for PersistentVolume local-sjx7g to have phase Bound
Oct  6 23:55:12.676: INFO: PersistentVolume local-sjx7g found and phase=Bound (32.601412ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-m5g4
STEP: Creating a pod to test subpath
Oct  6 23:55:12.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m5g4" in namespace "provisioning-3439" to be "Succeeded or Failed"
Oct  6 23:55:12.856: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 48.58579ms
Oct  6 23:55:14.887: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079401648s
Oct  6 23:55:16.919: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111124864s
Oct  6 23:55:18.957: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149476429s
Oct  6 23:55:21.033: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.225065862s
STEP: Saw pod success
Oct  6 23:55:21.033: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4" satisfied condition "Succeeded or Failed"
Oct  6 23:55:21.130: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-preprovisionedpv-m5g4 container test-container-subpath-preprovisionedpv-m5g4: <nil>
STEP: delete the pod
Oct  6 23:55:21.262: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m5g4 to disappear
Oct  6 23:55:21.287: INFO: Pod pod-subpath-test-preprovisionedpv-m5g4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-m5g4
Oct  6 23:55:21.288: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m5g4" in namespace "provisioning-3439"
STEP: Creating pod pod-subpath-test-preprovisionedpv-m5g4
STEP: Creating a pod to test subpath
Oct  6 23:55:21.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m5g4" in namespace "provisioning-3439" to be "Succeeded or Failed"
Oct  6 23:55:21.403: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.023302ms
Oct  6 23:55:23.430: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062706892s
Oct  6 23:55:25.465: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098589686s
Oct  6 23:55:27.494: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127363765s
STEP: Saw pod success
Oct  6 23:55:27.494: INFO: Pod "pod-subpath-test-preprovisionedpv-m5g4" satisfied condition "Succeeded or Failed"
Oct  6 23:55:27.519: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-preprovisionedpv-m5g4 container test-container-subpath-preprovisionedpv-m5g4: <nil>
STEP: delete the pod
Oct  6 23:55:27.671: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m5g4 to disappear
Oct  6 23:55:27.695: INFO: Pod pod-subpath-test-preprovisionedpv-m5g4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-m5g4
Oct  6 23:55:27.695: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m5g4" in namespace "provisioning-3439"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":78,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct  6 23:54:37.505: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-14985hwd9
STEP: creating a claim
Oct  6 23:54:37.529: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-89cd
STEP: Creating a pod to test subpath
Oct  6 23:54:37.605: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-89cd" in namespace "provisioning-1498" to be "Succeeded or Failed"
Oct  6 23:54:37.630: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.202603ms
Oct  6 23:54:39.657: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051847198s
Oct  6 23:54:41.741: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135167962s
Oct  6 23:54:43.792: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186759274s
Oct  6 23:54:45.886: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280124565s
Oct  6 23:54:47.939: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333596654s
Oct  6 23:54:49.965: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.359291305s
Oct  6 23:54:51.991: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.385628017s
Oct  6 23:54:54.018: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.412062463s
Oct  6 23:54:56.092: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.486538243s
Oct  6 23:54:58.118: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.512950574s
STEP: Saw pod success
Oct  6 23:54:58.119: INFO: Pod "pod-subpath-test-dynamicpv-89cd" satisfied condition "Succeeded or Failed"
Oct  6 23:54:58.150: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-dynamicpv-89cd container test-container-subpath-dynamicpv-89cd: <nil>
STEP: delete the pod
Oct  6 23:54:58.282: INFO: Waiting for pod pod-subpath-test-dynamicpv-89cd to disappear
Oct  6 23:54:58.331: INFO: Pod pod-subpath-test-dynamicpv-89cd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-89cd
Oct  6 23:54:58.332: INFO: Deleting pod "pod-subpath-test-dynamicpv-89cd" in namespace "provisioning-1498"
STEP: Creating pod pod-subpath-test-dynamicpv-89cd
STEP: Creating a pod to test subpath
Oct  6 23:54:58.438: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-89cd" in namespace "provisioning-1498" to be "Succeeded or Failed"
Oct  6 23:54:58.472: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.343047ms
Oct  6 23:55:00.497: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059697389s
Oct  6 23:55:02.522: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084120139s
Oct  6 23:55:04.546: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108598246s
Oct  6 23:55:06.578: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139872596s
Oct  6 23:55:08.603: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165161517s
Oct  6 23:55:10.630: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.192433749s
Oct  6 23:55:12.673: INFO: Pod "pod-subpath-test-dynamicpv-89cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.235763635s
STEP: Saw pod success
Oct  6 23:55:12.674: INFO: Pod "pod-subpath-test-dynamicpv-89cd" satisfied condition "Succeeded or Failed"
Oct  6 23:55:12.714: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-dynamicpv-89cd container test-container-subpath-dynamicpv-89cd: <nil>
STEP: delete the pod
Oct  6 23:55:12.874: INFO: Waiting for pod pod-subpath-test-dynamicpv-89cd to disappear
Oct  6 23:55:12.905: INFO: Pod pod-subpath-test-dynamicpv-89cd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-89cd
Oct  6 23:55:12.905: INFO: Deleting pod "pod-subpath-test-dynamicpv-89cd" in namespace "provisioning-1498"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:33.383: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":52,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":59,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:55:22.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:12.571 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:34.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":9,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:34.707: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":71,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:34.787: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 79 lines ...
Oct  6 23:55:36.839: INFO: Waiting for PV local-pv29cm6 to bind to PVC pvc-r92c7
Oct  6 23:55:36.839: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r92c7] to have phase Bound
Oct  6 23:55:36.870: INFO: PersistentVolumeClaim pvc-r92c7 found but phase is Pending instead of Bound.
Oct  6 23:55:38.896: INFO: PersistentVolumeClaim pvc-r92c7 found and phase=Bound (2.057356307s)
Oct  6 23:55:38.896: INFO: Waiting up to 3m0s for PersistentVolume local-pv29cm6 to have phase Bound
Oct  6 23:55:38.923: INFO: PersistentVolume local-pv29cm6 found and phase=Bound (27.293861ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Oct  6 23:55:39.031: INFO: Waiting up to 5m0s for pod "pod-95d0e2e8-29df-4270-800d-03a1fd8d6bd0" in namespace "persistent-local-volumes-test-3993" to be "Unschedulable"
Oct  6 23:55:39.061: INFO: Pod "pod-95d0e2e8-29df-4270-800d-03a1fd8d6bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.065853ms
Oct  6 23:55:39.061: INFO: Pod "pod-95d0e2e8-29df-4270-800d-03a1fd8d6bd0" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:11.236 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":10,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:55:33.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct  6 23:55:33.931: INFO: Waiting up to 5m0s for pod "security-context-2a262d56-8550-4687-9297-c26512653ff2" in namespace "security-context-5181" to be "Succeeded or Failed"
Oct  6 23:55:33.967: INFO: Pod "security-context-2a262d56-8550-4687-9297-c26512653ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.253802ms
Oct  6 23:55:35.991: INFO: Pod "security-context-2a262d56-8550-4687-9297-c26512653ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060396859s
Oct  6 23:55:38.016: INFO: Pod "security-context-2a262d56-8550-4687-9297-c26512653ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084678296s
Oct  6 23:55:40.050: INFO: Pod "security-context-2a262d56-8550-4687-9297-c26512653ff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119014159s
STEP: Saw pod success
Oct  6 23:55:40.050: INFO: Pod "security-context-2a262d56-8550-4687-9297-c26512653ff2" satisfied condition "Succeeded or Failed"
Oct  6 23:55:40.077: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod security-context-2a262d56-8550-4687-9297-c26512653ff2 container test-container: <nil>
STEP: delete the pod
Oct  6 23:55:40.156: INFO: Waiting for pod security-context-2a262d56-8550-4687-9297-c26512653ff2 to disappear
Oct  6 23:55:40.186: INFO: Pod security-context-2a262d56-8550-4687-9297-c26512653ff2 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.512 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":5,"skipped":56,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 38 lines ...
STEP: Deleting pod hostexec-nodes-us-west3-a-v32d-wg9mg in namespace volumemode-2087
Oct  6 23:55:27.973: INFO: Deleting pod "pod-80152eff-4f49-4b85-aadc-15edd8f7861b" in namespace "volumemode-2087"
Oct  6 23:55:28.009: INFO: Wait up to 5m0s for pod "pod-80152eff-4f49-4b85-aadc-15edd8f7861b" to be fully deleted
STEP: Deleting pv and pvc
Oct  6 23:55:34.072: INFO: Deleting PersistentVolumeClaim "pvc-2ppw6"
Oct  6 23:55:34.101: INFO: Deleting PersistentVolume "gcepd-m7z9f"
Oct  6 23:55:34.669: INFO: error deleting PD "e2e-30d46949-04bc-4378-a356-182cb21155c0": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-30d46949-04bc-4378-a356-182cb21155c0' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d', resourceInUseByAnotherResource
Oct  6 23:55:34.669: INFO: Couldn't delete PD "e2e-30d46949-04bc-4378-a356-182cb21155c0", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-30d46949-04bc-4378-a356-182cb21155c0' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-v32d', resourceInUseByAnotherResource
Oct  6 23:55:41.520: INFO: Successfully deleted PD "e2e-30d46949-04bc-4378-a356-182cb21155c0".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:41.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-2087" for this suite.

... skipping 33 lines ...
Oct  6 23:55:25.447: INFO: PersistentVolumeClaim pvc-j72k5 found but phase is Pending instead of Bound.
Oct  6 23:55:27.473: INFO: PersistentVolumeClaim pvc-j72k5 found and phase=Bound (8.160187754s)
Oct  6 23:55:27.473: INFO: Waiting up to 3m0s for PersistentVolume local-mvqr8 to have phase Bound
Oct  6 23:55:27.498: INFO: PersistentVolume local-mvqr8 found and phase=Bound (25.374077ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fp2d
STEP: Creating a pod to test subpath
Oct  6 23:55:27.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fp2d" in namespace "provisioning-565" to be "Succeeded or Failed"
Oct  6 23:55:27.666: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.953297ms
Oct  6 23:55:29.693: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083464652s
Oct  6 23:55:31.720: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110210835s
Oct  6 23:55:33.748: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138775682s
Oct  6 23:55:35.780: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170913403s
Oct  6 23:55:37.807: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.197332575s
Oct  6 23:55:39.844: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.234731319s
Oct  6 23:55:41.882: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.272975625s
STEP: Saw pod success
Oct  6 23:55:41.883: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2d" satisfied condition "Succeeded or Failed"
Oct  6 23:55:41.909: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-preprovisionedpv-fp2d container test-container-volume-preprovisionedpv-fp2d: <nil>
STEP: delete the pod
Oct  6 23:55:41.980: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fp2d to disappear
Oct  6 23:55:42.006: INFO: Pod pod-subpath-test-preprovisionedpv-fp2d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fp2d
Oct  6 23:55:42.006: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fp2d" in namespace "provisioning-565"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":17,"skipped":148,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:42.707: INFO: Only supported for providers [openstack] (not gce)
... skipping 173 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":75,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":10,"skipped":54,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:55:41.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:249
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:43.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 29 lines ...
• [SLOW TEST:75.818 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":8,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:44.405: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 147 lines ...
• [SLOW TEST:10.477 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:27.333 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":13,"skipped":105,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:47.986: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 169 lines ...
      Driver local doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":11,"skipped":54,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:55:43.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  6 23:55:44.041: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-33bd7c8e-2e38-41c8-a5f2-b76fad68d7b0" in namespace "security-context-test-5764" to be "Succeeded or Failed"
Oct  6 23:55:44.079: INFO: Pod "busybox-privileged-false-33bd7c8e-2e38-41c8-a5f2-b76fad68d7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.602679ms
Oct  6 23:55:46.108: INFO: Pod "busybox-privileged-false-33bd7c8e-2e38-41c8-a5f2-b76fad68d7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067339184s
Oct  6 23:55:48.138: INFO: Pod "busybox-privileged-false-33bd7c8e-2e38-41c8-a5f2-b76fad68d7b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096994578s
Oct  6 23:55:48.138: INFO: Pod "busybox-privileged-false-33bd7c8e-2e38-41c8-a5f2-b76fad68d7b0" satisfied condition "Succeeded or Failed"
Oct  6 23:55:48.214: INFO: Got logs for pod "busybox-privileged-false-33bd7c8e-2e38-41c8-a5f2-b76fad68d7b0": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:55:48.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5764" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":54,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:48.315: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
Oct  6 23:54:09.567: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-8277rdpv5      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/gce-pd,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-8277    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8277rdpv5,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-8277    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8277rdpv5,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-8277    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8277rdpv5,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-8277rdpv5    fb1e0828-ea40-47f3-ab23-878c7e07e813 8464 0 2021-10-06 23:54:09 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-10-06 23:54:09 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/gce-pd,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-khfl7 pvc- provisioning-8277  49675a41-1a73-403e-b12c-42cb06f57cd6 8472 0 2021-10-06 23:54:09 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-10-06 23:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8277rdpv5,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-e8f98c48-4756-401a-9ec9-6c338d39c7f5 in namespace provisioning-8277
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Oct  6 23:54:29.846: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-kw9gk" in namespace "provisioning-8277" to be "Succeeded or Failed"
Oct  6 23:54:29.870: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 23.718385ms
Oct  6 23:54:31.898: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05166534s
Oct  6 23:54:33.921: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074762276s
Oct  6 23:54:35.946: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099622715s
Oct  6 23:54:37.969: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123003637s
Oct  6 23:54:39.993: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.146997404s
... skipping 11 lines ...
Oct  6 23:55:04.404: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.557603607s
Oct  6 23:55:06.429: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 36.583099504s
Oct  6 23:55:08.455: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 38.608848748s
Oct  6 23:55:10.479: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Pending", Reason="", readiness=false. Elapsed: 40.633272988s
Oct  6 23:55:12.508: INFO: Pod "pvc-volume-tester-writer-kw9gk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.661705738s
STEP: Saw pod success
Oct  6 23:55:12.508: INFO: Pod "pvc-volume-tester-writer-kw9gk" satisfied condition "Succeeded or Failed"
Oct  6 23:55:12.567: INFO: Pod pvc-volume-tester-writer-kw9gk has the following logs: 
Oct  6 23:55:12.567: INFO: Deleting pod "pvc-volume-tester-writer-kw9gk" in namespace "provisioning-8277"
Oct  6 23:55:12.620: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-kw9gk" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "nodes-us-west3-a-87xh"
Oct  6 23:55:12.775: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-mb8jd" in namespace "provisioning-8277" to be "Succeeded or Failed"
Oct  6 23:55:12.823: INFO: Pod "pvc-volume-tester-reader-mb8jd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.487924ms
Oct  6 23:55:14.856: INFO: Pod "pvc-volume-tester-reader-mb8jd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08149374s
Oct  6 23:55:16.883: INFO: Pod "pvc-volume-tester-reader-mb8jd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107774203s
Oct  6 23:55:18.921: INFO: Pod "pvc-volume-tester-reader-mb8jd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146287746s
STEP: Saw pod success
Oct  6 23:55:18.921: INFO: Pod "pvc-volume-tester-reader-mb8jd" satisfied condition "Succeeded or Failed"
Oct  6 23:55:19.016: INFO: Pod pvc-volume-tester-reader-mb8jd has the following logs: hello world

Oct  6 23:55:19.016: INFO: Deleting pod "pvc-volume-tester-reader-mb8jd" in namespace "provisioning-8277"
Oct  6 23:55:19.108: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-mb8jd" to be fully deleted
Oct  6 23:55:19.151: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-khfl7] to have phase Bound
Oct  6 23:55:19.186: INFO: PersistentVolumeClaim pvc-khfl7 found and phase=Bound (34.478493ms)
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":13,"skipped":138,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:49.719: INFO: Only supported for providers [openstack] (not gce)
... skipping 71 lines ...
• [SLOW TEST:9.615 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
Oct  6 23:54:30.985: INFO: Terminating ReplicationController up-down-1 pods took: 101.03986ms
STEP: verifying service up-down-1 is not up
Oct  6 23:54:36.627: INFO: Creating new host exec pod
Oct  6 23:54:36.678: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  6 23:54:38.703: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct  6 23:54:40.702: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct  6 23:54:40.702: INFO: Running '/tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=services-1726 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.148.195:80 && echo service-down-failed'
Oct  6 23:54:43.083: INFO: rc: 28
Oct  6 23:54:43.083: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.148.195:80 && echo service-down-failed" in pod services-1726/verify-service-down-host-exec-pod: error running /tmp/kubectl2777438504/kubectl --server=https://34.106.187.92 --kubeconfig=/root/.kube/config --namespace=services-1726 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.148.195:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.148.195:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1726
STEP: verifying service up-down-2 is still up
Oct  6 23:54:43.213: INFO: Creating new host exec pod
Oct  6 23:54:43.303: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
... skipping 72 lines ...
• [SLOW TEST:126.432 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1036
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":5,"skipped":64,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:54.971: INFO: Driver local doesn't support ext4 -- skipping
... skipping 67 lines ...
Oct  6 23:55:48.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct  6 23:55:48.352: INFO: Waiting up to 5m0s for pod "pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa" in namespace "emptydir-7420" to be "Succeeded or Failed"
Oct  6 23:55:48.382: INFO: Pod "pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa": Phase="Pending", Reason="", readiness=false. Elapsed: 29.859841ms
Oct  6 23:55:50.421: INFO: Pod "pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068724484s
Oct  6 23:55:52.451: INFO: Pod "pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098667098s
Oct  6 23:55:54.574: INFO: Pod "pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221456584s
STEP: Saw pod success
Oct  6 23:55:54.574: INFO: Pod "pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa" satisfied condition "Succeeded or Failed"
Oct  6 23:55:54.716: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa container test-container: <nil>
STEP: delete the pod
Oct  6 23:55:54.910: INFO: Waiting for pod pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa to disappear
Oct  6 23:55:54.938: INFO: Pod pod-1b85c4a8-5623-48f7-acd5-598f39ad01aa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":130,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:55.061: INFO: Only supported for providers [openstack] (not gce)
... skipping 107 lines ...
• [SLOW TEST:25.019 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:57.449: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 240 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":10,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:58.729: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":8,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:55:59.720: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  6 23:55:54.891: INFO: Waiting up to 5m0s for pod "pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa" in namespace "emptydir-8971" to be "Succeeded or Failed"
Oct  6 23:55:54.928: INFO: Pod "pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 37.358462ms
Oct  6 23:55:57.003: INFO: Pod "pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111590899s
Oct  6 23:55:59.037: INFO: Pod "pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146484739s
Oct  6 23:56:01.072: INFO: Pod "pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180977519s
STEP: Saw pod success
Oct  6 23:56:01.072: INFO: Pod "pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa" satisfied condition "Succeeded or Failed"
Oct  6 23:56:01.099: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa container test-container: <nil>
STEP: delete the pod
Oct  6 23:56:01.189: INFO: Waiting for pod pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa to disappear
Oct  6 23:56:01.214: INFO: Pod pod-8db5065c-7f9c-4459-9bb2-9cc051fd4cfa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":10,"skipped":72,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:110.277 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:01.506: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Oct  6 23:55:56.465: INFO: PersistentVolumeClaim pvc-6b8hs found but phase is Pending instead of Bound.
Oct  6 23:55:58.491: INFO: PersistentVolumeClaim pvc-6b8hs found and phase=Bound (14.401854388s)
Oct  6 23:55:58.491: INFO: Waiting up to 3m0s for PersistentVolume local-njz4c to have phase Bound
Oct  6 23:55:58.524: INFO: PersistentVolume local-njz4c found and phase=Bound (32.973249ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-vd8r
STEP: Creating a pod to test exec-volume-test
Oct  6 23:55:58.602: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vd8r" in namespace "volume-2486" to be "Succeeded or Failed"
Oct  6 23:55:58.636: INFO: Pod "exec-volume-test-preprovisionedpv-vd8r": Phase="Pending", Reason="", readiness=false. Elapsed: 34.162919ms
Oct  6 23:56:00.665: INFO: Pod "exec-volume-test-preprovisionedpv-vd8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062586044s
Oct  6 23:56:02.691: INFO: Pod "exec-volume-test-preprovisionedpv-vd8r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088770566s
STEP: Saw pod success
Oct  6 23:56:02.691: INFO: Pod "exec-volume-test-preprovisionedpv-vd8r" satisfied condition "Succeeded or Failed"
Oct  6 23:56:02.717: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod exec-volume-test-preprovisionedpv-vd8r container exec-container-preprovisionedpv-vd8r: <nil>
STEP: delete the pod
Oct  6 23:56:02.789: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vd8r to disappear
Oct  6 23:56:02.820: INFO: Pod exec-volume-test-preprovisionedpv-vd8r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-vd8r
Oct  6 23:56:02.820: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vd8r" in namespace "volume-2486"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:03.272: INFO: Only supported for providers [azure] (not gce)
... skipping 152 lines ...
Oct  6 23:55:26.899: INFO: PersistentVolumeClaim csi-hostpathd2gnp found but phase is Pending instead of Bound.
Oct  6 23:55:28.928: INFO: PersistentVolumeClaim csi-hostpathd2gnp found but phase is Pending instead of Bound.
Oct  6 23:55:30.952: INFO: PersistentVolumeClaim csi-hostpathd2gnp found but phase is Pending instead of Bound.
Oct  6 23:55:33.023: INFO: PersistentVolumeClaim csi-hostpathd2gnp found and phase=Bound (10.246147866s)
STEP: Creating pod pod-subpath-test-dynamicpv-hs6d
STEP: Creating a pod to test subpath
Oct  6 23:55:33.134: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hs6d" in namespace "provisioning-1800" to be "Succeeded or Failed"
Oct  6 23:55:33.165: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.926955ms
Oct  6 23:55:35.194: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060597842s
Oct  6 23:55:37.225: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091759378s
Oct  6 23:55:39.251: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117181722s
Oct  6 23:55:41.276: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142636933s
Oct  6 23:55:43.309: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17524493s
Oct  6 23:55:45.340: INFO: Pod "pod-subpath-test-dynamicpv-hs6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.20686095s
STEP: Saw pod success
Oct  6 23:55:45.341: INFO: Pod "pod-subpath-test-dynamicpv-hs6d" satisfied condition "Succeeded or Failed"
Oct  6 23:55:45.367: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-dynamicpv-hs6d container test-container-subpath-dynamicpv-hs6d: <nil>
STEP: delete the pod
Oct  6 23:55:45.484: INFO: Waiting for pod pod-subpath-test-dynamicpv-hs6d to disappear
Oct  6 23:55:45.558: INFO: Pod pod-subpath-test-dynamicpv-hs6d no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hs6d
Oct  6 23:55:45.559: INFO: Deleting pod "pod-subpath-test-dynamicpv-hs6d" in namespace "provisioning-1800"
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":40,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":18,"skipped":151,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:07.609: INFO: Only supported for providers [azure] (not gce)
... skipping 102 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Oct  6 23:56:06.259: INFO: Waiting up to 5m0s for pod "pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30" in namespace "emptydir-8244" to be "Succeeded or Failed"
Oct  6 23:56:06.284: INFO: Pod "pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30": Phase="Pending", Reason="", readiness=false. Elapsed: 25.242307ms
Oct  6 23:56:08.310: INFO: Pod "pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051596614s
Oct  6 23:56:10.334: INFO: Pod "pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075843369s
Oct  6 23:56:12.362: INFO: Pod "pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102951641s
STEP: Saw pod success
Oct  6 23:56:12.362: INFO: Pod "pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30" satisfied condition "Succeeded or Failed"
Oct  6 23:56:12.386: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30 container test-container: <nil>
STEP: delete the pod
Oct  6 23:56:12.458: INFO: Waiting for pod pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30 to disappear
Oct  6 23:56:12.484: INFO: Pod pod-7feb01ba-f9d8-490f-9bc9-13979d9f8d30 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":11,"skipped":41,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:12.545: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Creating pod
Oct  6 23:55:30.192: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  6 23:55:30.221: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xzd8g] to have phase Bound
Oct  6 23:55:30.257: INFO: PersistentVolumeClaim pvc-xzd8g found but phase is Pending instead of Bound.
Oct  6 23:55:32.286: INFO: PersistentVolumeClaim pvc-xzd8g found and phase=Bound (2.065472283s)
STEP: checking for CSIInlineVolumes feature
Oct  6 23:55:46.494: INFO: Error getting logs for pod inline-volume-7zm4t: the server rejected our request for an unknown reason (get pods inline-volume-7zm4t)
Oct  6 23:55:46.544: INFO: Deleting pod "inline-volume-7zm4t" in namespace "csi-mock-volumes-8667"
Oct  6 23:55:46.581: INFO: Wait up to 5m0s for pod "inline-volume-7zm4t" to be fully deleted
STEP: Deleting the previously created pod
Oct  6 23:55:56.644: INFO: Deleting pod "pvc-volume-tester-4g4sc" in namespace "csi-mock-volumes-8667"
Oct  6 23:55:56.693: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4g4sc" to be fully deleted
STEP: Checking CSI driver logs
Oct  6 23:55:58.861: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-4g4sc
Oct  6 23:55:58.861: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-8667
Oct  6 23:55:58.861: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 2a4cbbfa-db78-472a-aab0-af9cffc44c0b
Oct  6 23:55:58.861: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct  6 23:55:58.861: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Oct  6 23:55:58.861: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2a4cbbfa-db78-472a-aab0-af9cffc44c0b/volumes/kubernetes.io~csi/pvc-6234dca3-0bb5-4232-ab6f-fe78dd79583f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-4g4sc
Oct  6 23:55:58.861: INFO: Deleting pod "pvc-volume-tester-4g4sc" in namespace "csi-mock-volumes-8667"
STEP: Deleting claim pvc-xzd8g
Oct  6 23:55:58.969: INFO: Waiting up to 2m0s for PersistentVolume pvc-6234dca3-0bb5-4232-ab6f-fe78dd79583f to get deleted
Oct  6 23:55:59.004: INFO: PersistentVolume pvc-6234dca3-0bb5-4232-ab6f-fe78dd79583f found and phase=Bound (34.744182ms)
Oct  6 23:56:01.036: INFO: PersistentVolume pvc-6234dca3-0bb5-4232-ab6f-fe78dd79583f was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":9,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:14.208: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:14.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8649" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":10,"skipped":71,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:14.954: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Only supported for providers [openstack] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":15,"skipped":139,"failed":0}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:56:05.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":139,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:15.823: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:17.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-4758" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":13,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:17.991: INFO: Only supported for providers [openstack] (not gce)
... skipping 32 lines ...
Oct  6 23:55:28.124: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9552sxv7m
STEP: creating a claim
Oct  6 23:55:28.160: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-g29n
STEP: Creating a pod to test subpath
Oct  6 23:55:28.259: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-g29n" in namespace "provisioning-9552" to be "Succeeded or Failed"
Oct  6 23:55:28.289: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 29.79498ms
Oct  6 23:55:30.317: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058006248s
Oct  6 23:55:32.345: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085339143s
Oct  6 23:55:34.373: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113829728s
Oct  6 23:55:36.406: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146871915s
Oct  6 23:55:38.433: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.173956646s
... skipping 2 lines ...
Oct  6 23:55:44.517: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 16.257954417s
Oct  6 23:55:46.544: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 18.28456044s
Oct  6 23:55:48.574: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 20.314885602s
Oct  6 23:55:50.609: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Pending", Reason="", readiness=false. Elapsed: 22.349747011s
Oct  6 23:55:52.641: INFO: Pod "pod-subpath-test-dynamicpv-g29n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.381856564s
STEP: Saw pod success
Oct  6 23:55:52.641: INFO: Pod "pod-subpath-test-dynamicpv-g29n" satisfied condition "Succeeded or Failed"
Oct  6 23:55:52.667: INFO: Trying to get logs from node nodes-us-west3-a-xm8f pod pod-subpath-test-dynamicpv-g29n container test-container-volume-dynamicpv-g29n: <nil>
STEP: delete the pod
Oct  6 23:55:52.749: INFO: Waiting for pod pod-subpath-test-dynamicpv-g29n to disappear
Oct  6 23:55:52.773: INFO: Pod pod-subpath-test-dynamicpv-g29n no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-g29n
Oct  6 23:55:52.774: INFO: Deleting pod "pod-subpath-test-dynamicpv-g29n" in namespace "provisioning-9552"
... skipping 41 lines ...
STEP: Destroying namespace "services-3307" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":62,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":14,"skipped":71,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:18.231: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 74 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 10 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
... skipping 19 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":11,"skipped":97,"failed":0}
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  6 23:56:09.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:9.090 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":12,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:18.450: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
Oct  6 23:55:59.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Oct  6 23:55:59.962: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  6 23:56:00.050: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4855" in namespace "provisioning-4855" to be "Succeeded or Failed"
Oct  6 23:56:00.105: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Pending", Reason="", readiness=false. Elapsed: 54.625385ms
Oct  6 23:56:02.130: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079913432s
Oct  6 23:56:04.164: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113593524s
Oct  6 23:56:06.189: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138692987s
Oct  6 23:56:08.215: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.164758864s
STEP: Saw pod success
Oct  6 23:56:08.215: INFO: Pod "hostpath-symlink-prep-provisioning-4855" satisfied condition "Succeeded or Failed"
Oct  6 23:56:08.215: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4855" in namespace "provisioning-4855"
Oct  6 23:56:08.260: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4855" to be fully deleted
Oct  6 23:56:08.297: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hlls
STEP: Creating a pod to test subpath
Oct  6 23:56:08.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hlls" in namespace "provisioning-4855" to be "Succeeded or Failed"
Oct  6 23:56:08.359: INFO: Pod "pod-subpath-test-inlinevolume-hlls": Phase="Pending", Reason="", readiness=false. Elapsed: 33.816656ms
Oct  6 23:56:10.387: INFO: Pod "pod-subpath-test-inlinevolume-hlls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06182041s
Oct  6 23:56:12.415: INFO: Pod "pod-subpath-test-inlinevolume-hlls": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08976443s
Oct  6 23:56:14.439: INFO: Pod "pod-subpath-test-inlinevolume-hlls": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114176183s
Oct  6 23:56:16.466: INFO: Pod "pod-subpath-test-inlinevolume-hlls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141194084s
STEP: Saw pod success
Oct  6 23:56:16.466: INFO: Pod "pod-subpath-test-inlinevolume-hlls" satisfied condition "Succeeded or Failed"
Oct  6 23:56:16.490: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-subpath-test-inlinevolume-hlls container test-container-subpath-inlinevolume-hlls: <nil>
STEP: delete the pod
Oct  6 23:56:16.551: INFO: Waiting for pod pod-subpath-test-inlinevolume-hlls to disappear
Oct  6 23:56:16.573: INFO: Pod pod-subpath-test-inlinevolume-hlls no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hlls
Oct  6 23:56:16.573: INFO: Deleting pod "pod-subpath-test-inlinevolume-hlls" in namespace "provisioning-4855"
STEP: Deleting pod
Oct  6 23:56:16.596: INFO: Deleting pod "pod-subpath-test-inlinevolume-hlls" in namespace "provisioning-4855"
Oct  6 23:56:16.659: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4855" in namespace "provisioning-4855" to be "Succeeded or Failed"
Oct  6 23:56:16.684: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Pending", Reason="", readiness=false. Elapsed: 24.417903ms
Oct  6 23:56:18.708: INFO: Pod "hostpath-symlink-prep-provisioning-4855": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.048902399s
STEP: Saw pod success
Oct  6 23:56:18.708: INFO: Pod "hostpath-symlink-prep-provisioning-4855" satisfied condition "Succeeded or Failed"
Oct  6 23:56:18.708: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4855" in namespace "provisioning-4855"
Oct  6 23:56:18.738: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4855" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:18.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4855" for this suite.
... skipping 30 lines ...
Oct  6 23:56:12.150: INFO: PersistentVolumeClaim pvc-hfbjs found but phase is Pending instead of Bound.
Oct  6 23:56:14.174: INFO: PersistentVolumeClaim pvc-hfbjs found and phase=Bound (2.05053483s)
Oct  6 23:56:14.174: INFO: Waiting up to 3m0s for PersistentVolume local-shjqv to have phase Bound
Oct  6 23:56:14.199: INFO: PersistentVolume local-shjqv found and phase=Bound (24.188086ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wfkh
STEP: Creating a pod to test subpath
Oct  6 23:56:14.275: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wfkh" in namespace "provisioning-6732" to be "Succeeded or Failed"
Oct  6 23:56:14.305: INFO: Pod "pod-subpath-test-preprovisionedpv-wfkh": Phase="Pending", Reason="", readiness=false. Elapsed: 29.443689ms
Oct  6 23:56:16.333: INFO: Pod "pod-subpath-test-preprovisionedpv-wfkh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058070037s
Oct  6 23:56:18.358: INFO: Pod "pod-subpath-test-preprovisionedpv-wfkh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082796751s
STEP: Saw pod success
Oct  6 23:56:18.358: INFO: Pod "pod-subpath-test-preprovisionedpv-wfkh" satisfied condition "Succeeded or Failed"
Oct  6 23:56:18.388: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-subpath-test-preprovisionedpv-wfkh container test-container-subpath-preprovisionedpv-wfkh: <nil>
STEP: delete the pod
Oct  6 23:56:18.467: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wfkh to disappear
Oct  6 23:56:18.490: INFO: Pod pod-subpath-test-preprovisionedpv-wfkh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wfkh
Oct  6 23:56:18.490: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wfkh" in namespace "provisioning-6732"
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":19,"skipped":160,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:21.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5411" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":20,"skipped":164,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:21.904: INFO: Only supported for providers [openstack] (not gce)
... skipping 55 lines ...
W1006 23:55:33.557201    5691 gce_instances.go:410] Cloud object does not have informers set, should only happen in E2E binary.
Oct  6 23:55:35.186: INFO: Successfully created a new PD: "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed".
Oct  6 23:55:35.186: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-ds7v
STEP: Creating a pod to test exec-volume-test
W1006 23:55:35.219548    5691 warnings.go:70] spec.nodeSelector[failure-domain.beta.kubernetes.io/zone]: deprecated since v1.17; use "topology.kubernetes.io/zone" instead
Oct  6 23:55:35.219: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-ds7v" in namespace "volume-4746" to be "Succeeded or Failed"
Oct  6 23:55:35.249: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 29.542826ms
Oct  6 23:55:37.273: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054198249s
Oct  6 23:55:39.301: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081333252s
Oct  6 23:55:41.338: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118313314s
Oct  6 23:55:43.364: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144516966s
Oct  6 23:55:45.400: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181235222s
Oct  6 23:55:47.429: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209831876s
Oct  6 23:55:49.468: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.248535243s
Oct  6 23:55:51.522: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.302538008s
Oct  6 23:55:53.550: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 18.330428573s
Oct  6 23:55:55.735: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Pending", Reason="", readiness=false. Elapsed: 20.515834363s
Oct  6 23:55:57.763: INFO: Pod "exec-volume-test-inlinevolume-ds7v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.544131969s
STEP: Saw pod success
Oct  6 23:55:57.763: INFO: Pod "exec-volume-test-inlinevolume-ds7v" satisfied condition "Succeeded or Failed"
Oct  6 23:55:57.804: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod exec-volume-test-inlinevolume-ds7v container exec-container-inlinevolume-ds7v: <nil>
STEP: delete the pod
Oct  6 23:55:57.953: INFO: Waiting for pod exec-volume-test-inlinevolume-ds7v to disappear
Oct  6 23:55:58.045: INFO: Pod exec-volume-test-inlinevolume-ds7v no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-ds7v
Oct  6 23:55:58.045: INFO: Deleting pod "exec-volume-test-inlinevolume-ds7v" in namespace "volume-4746"
Oct  6 23:55:58.628: INFO: error deleting PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:55:58.628: INFO: Couldn't delete PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:04.190: INFO: error deleting PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:04.190: INFO: Couldn't delete PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:09.764: INFO: error deleting PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:09.764: INFO: Couldn't delete PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:15.244: INFO: error deleting PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:15.244: INFO: Couldn't delete PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/disks/e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed' is already being used by 'projects/k8s-boskos-gce-project-06/zones/us-west3-a/instances/nodes-us-west3-a-87xh', resourceInUseByAnotherResource
Oct  6 23:56:22.190: INFO: Successfully deleted PD "e2e-79fcdaf0-9d88-40e9-9d67-44d72d4607ed".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:22.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4746" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":52,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:23.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-7749" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":21,"skipped":170,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 107 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":43,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:24.409: INFO: Only supported for providers [aws] (not gce)
... skipping 106 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":2,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":17,"skipped":143,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:28.800: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct  6 23:56:24.678: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-292" to be "Succeeded or Failed"
Oct  6 23:56:24.702: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 23.268035ms
Oct  6 23:56:26.727: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048585923s
Oct  6 23:56:28.768: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089298941s
STEP: Saw pod success
Oct  6 23:56:28.768: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  6 23:56:28.795: INFO: Trying to get logs from node nodes-us-west3-a-87xh pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  6 23:56:28.879: INFO: Waiting for pod pod-host-path-test to disappear
Oct  6 23:56:28.905: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:28.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-292" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":13,"skipped":67,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:28.980: INFO: Only supported for providers [aws] (not gce)
... skipping 71 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:40.183: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:40.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2225" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:40.577: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-083fd155-81b5-4357-926b-09aee726bbe3
STEP: Creating a pod to test consume secrets
Oct  6 23:56:29.220: INFO: Waiting up to 5m0s for pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2" in namespace "secrets-2854" to be "Succeeded or Failed"
Oct  6 23:56:29.249: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.139905ms
Oct  6 23:56:31.275: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055583304s
Oct  6 23:56:33.303: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083017954s
Oct  6 23:56:35.329: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109818657s
Oct  6 23:56:37.358: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138380407s
Oct  6 23:56:39.386: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.16615696s
Oct  6 23:56:41.410: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.190649562s
Oct  6 23:56:43.434: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.214678581s
Oct  6 23:56:45.459: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.23960649s
Oct  6 23:56:47.485: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.265684508s
STEP: Saw pod success
Oct  6 23:56:47.485: INFO: Pod "pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2" satisfied condition "Succeeded or Failed"
Oct  6 23:56:47.509: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2 container secret-volume-test: <nil>
STEP: delete the pod
Oct  6 23:56:47.593: INFO: Waiting for pod pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2 to disappear
Oct  6 23:56:47.618: INFO: Pod pod-secrets-82310e0c-cf5d-4696-97f1-c899798ca9d2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.686 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":69,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:47.693: INFO: Driver windows-gcepd doesn't support ext3 -- skipping
... skipping 40 lines ...
Oct  6 23:56:26.994: INFO: PersistentVolumeClaim pvc-zd576 found but phase is Pending instead of Bound.
Oct  6 23:56:29.028: INFO: PersistentVolumeClaim pvc-zd576 found and phase=Bound (6.112594598s)
Oct  6 23:56:29.028: INFO: Waiting up to 3m0s for PersistentVolume local-9wv8c to have phase Bound
Oct  6 23:56:29.055: INFO: PersistentVolume local-9wv8c found and phase=Bound (27.363561ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-vmp4
STEP: Creating a pod to test exec-volume-test
Oct  6 23:56:29.146: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vmp4" in namespace "volume-1146" to be "Succeeded or Failed"
Oct  6 23:56:29.170: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.136828ms
Oct  6 23:56:31.195: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048397838s
Oct  6 23:56:33.220: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073360543s
Oct  6 23:56:35.244: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097786145s
Oct  6 23:56:37.269: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122975179s
Oct  6 23:56:39.295: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.148310914s
Oct  6 23:56:41.320: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.173394326s
Oct  6 23:56:43.345: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.198917942s
Oct  6 23:56:45.369: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.223161815s
Oct  6 23:56:47.395: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.248373505s
STEP: Saw pod success
Oct  6 23:56:47.395: INFO: Pod "exec-volume-test-preprovisionedpv-vmp4" satisfied condition "Succeeded or Failed"
Oct  6 23:56:47.420: INFO: Trying to get logs from node nodes-us-west3-a-v32d pod exec-volume-test-preprovisionedpv-vmp4 container exec-container-preprovisionedpv-vmp4: <nil>
STEP: delete the pod
Oct  6 23:56:47.491: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vmp4 to disappear
Oct  6 23:56:47.514: INFO: Pod exec-volume-test-preprovisionedpv-vmp4 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-vmp4
Oct  6 23:56:47.514: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vmp4" in namespace "volume-1146"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":66,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:48.066: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 125 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should store data","total":-1,"completed":7,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:48.109: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 205 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":14,"skipped":85,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:48.681: INFO: Only supported for providers [azure] (not gce)
... skipping 61 lines ...
Oct  6 23:55:35.920: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6899
Oct  6 23:55:35.946: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6899
Oct  6 23:55:35.970: INFO: creating *v1.StatefulSet: csi-mock-volumes-6899-4278/csi-mockplugin
Oct  6 23:55:36.022: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6899
Oct  6 23:55:36.056: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6899"
Oct  6 23:55:36.093: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6899 to register on node nodes-us-west3-a-vcbk
I1006 23:55:42.668702    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1006 23:55:42.693021    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6899","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1006 23:55:42.727898    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1006 23:55:42.771841    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1006 23:55:42.863826    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6899","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1006 23:55:43.110237    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6899"},"Error":"","FullError":null}
STEP: Creating pod
Oct  6 23:55:45.840: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  6 23:55:45.891: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kr52x] to have phase Bound
Oct  6 23:55:45.952: INFO: PersistentVolumeClaim pvc-kr52x found but phase is Pending instead of Bound.
I1006 23:55:45.962409    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e5ceea48-bfa5-4526-b488-59732e633d55","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1006 23:55:45.989983    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e5ceea48-bfa5-4526-b488-59732e633d55","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-e5ceea48-bfa5-4526-b488-59732e633d55"}}},"Error":"","FullError":null}
Oct  6 23:55:47.985: INFO: PersistentVolumeClaim pvc-kr52x found and phase=Bound (2.093798671s)
I1006 23:55:48.386123    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1006 23:55:48.418256    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  6 23:55:48.444: INFO: >>> kubeConfig: /root/.kube/config
I1006 23:55:48.753065    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e5ceea48-bfa5-4526-b488-59732e633d55/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e5ceea48-bfa5-4526-b488-59732e633d55","storage.kubernetes.io/csiProvisionerIdentity":"1633564542788-8081-csi-mock-csi-mock-volumes-6899"}},"Response":{},"Error":"","FullError":null}
I1006 23:55:49.026138    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1006 23:55:49.049990    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  6 23:55:49.078: INFO: >>> kubeConfig: /root/.kube/config
Oct  6 23:55:49.332: INFO: >>> kubeConfig: /root/.kube/config
Oct  6 23:55:49.616: INFO: >>> kubeConfig: /root/.kube/config
I1006 23:55:49.930600    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e5ceea48-bfa5-4526-b488-59732e633d55/globalmount","target_path":"/var/lib/kubelet/pods/9347ecfe-6899-408a-ab5a-85c755b7db10/volumes/kubernetes.io~csi/pvc-e5ceea48-bfa5-4526-b488-59732e633d55/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e5ceea48-bfa5-4526-b488-59732e633d55","storage.kubernetes.io/csiProvisionerIdentity":"1633564542788-8081-csi-mock-csi-mock-volumes-6899"}},"Response":{},"Error":"","FullError":null}
Oct  6 23:55:52.141: INFO: Deleting pod "pvc-volume-tester-qpsb5" in namespace "csi-mock-volumes-6899"
Oct  6 23:55:52.198: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qpsb5" to be fully deleted
Oct  6 23:55:53.683: INFO: >>> kubeConfig: /root/.kube/config
I1006 23:55:53.980515    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9347ecfe-6899-408a-ab5a-85c755b7db10/volumes/kubernetes.io~csi/pvc-e5ceea48-bfa5-4526-b488-59732e633d55/mount"},"Response":{},"Error":"","FullError":null}
I1006 23:55:54.082429    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1006 23:55:54.108454    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e5ceea48-bfa5-4526-b488-59732e633d55/globalmount"},"Response":{},"Error":"","FullError":null}
I1006 23:55:56.371567    5701 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct  6 23:55:57.309: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kr52x", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6899", SelfLink:"", UID:"e5ceea48-bfa5-4526-b488-59732e633d55", ResourceVersion:"12682", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769161345, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003749410), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003749428), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002171fb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002171fc0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  6 23:55:57.309: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kr52x", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6899", SelfLink:"", UID:"e5ceea48-bfa5-4526-b488-59732e633d55", ResourceVersion:"12684", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769161345, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6899"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00582c0f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00582c108), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00582c120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00582c138), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002178a50), VolumeMode:(*v1.PersistentVolumeMode)(0xc002178a60), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  6 23:55:57.309: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kr52x", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6899", SelfLink:"", UID:"e5ceea48-bfa5-4526-b488-59732e633d55", ResourceVersion:"12698", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769161345, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6899"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042ead08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042ead20), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042ead38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042ead50), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5ceea48-bfa5-4526-b488-59732e633d55", StorageClassName:(*string)(0xc002188e40), VolumeMode:(*v1.PersistentVolumeMode)(0xc002188e50), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  6 23:55:57.310: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kr52x", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6899", SelfLink:"", UID:"e5ceea48-bfa5-4526-b488-59732e633d55", ResourceVersion:"12699", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769161345, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6899"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042ead80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042ead98), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042eadb0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042eadc8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042eade0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042eadf8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5ceea48-bfa5-4526-b488-59732e633d55", StorageClassName:(*string)(0xc002188e80), VolumeMode:(*v1.PersistentVolumeMode)(0xc002188e90), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  6 23:55:57.310: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kr52x", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6899", SelfLink:"", UID:"e5ceea48-bfa5-4526-b488-59732e633d55", ResourceVersion:"13133", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769161345, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(0xc0042eae28), DeletionGracePeriodSeconds:(*int64)(0xc0005bdd88), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6899"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042eae40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042eae58), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042eae70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042eae88), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042eaea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042eaeb8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5ceea48-bfa5-4526-b488-59732e633d55", StorageClassName:(*string)(0xc002188ed0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002188ee0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":10,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:48.774: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  6 23:56:49.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6509" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":11,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  6 23:56:49.216: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 37625 lines ...

24415       1 service.go:301] Service svc-latency-2393/latency-svc-8cjfh updated: 1 ports\nI1006 23:58:23.831617       1 service.go:301] Service svc-latency-2393/latency-svc-g9zgv updated: 1 ports\nI1006 23:58:23.848923       1 service.go:301] Service svc-latency-2393/latency-svc-ntzqt updated: 1 ports\nI1006 23:58:23.851062       1 service.go:301] Service svc-latency-2393/latency-svc-r76jp updated: 1 ports\nI1006 23:58:23.931117       1 service.go:301] Service svc-latency-2393/latency-svc-c5kdc updated: 1 ports\nI1006 23:58:23.966175       1 service.go:301] Service svc-latency-2393/latency-svc-kmgds updated: 1 ports\nI1006 23:58:24.023933       1 service.go:301] Service svc-latency-2393/latency-svc-4khm5 updated: 1 ports\nI1006 23:58:24.034980       1 service.go:301] Service svc-latency-2393/latency-svc-9wr74 updated: 1 ports\nI1006 23:58:24.069832       1 service.go:301] Service svc-latency-2393/latency-svc-snxws updated: 1 ports\nI1006 23:58:24.090237       1 service.go:301] Service svc-latency-2393/latency-svc-sg7mj updated: 1 ports\nI1006 23:58:24.107112       1 service.go:301] Service svc-latency-2393/latency-svc-mjtlp updated: 1 ports\nI1006 23:58:24.118696       1 service.go:301] Service svc-latency-2393/latency-svc-kqcz4 updated: 1 ports\nI1006 23:58:24.136936       1 service.go:301] Service svc-latency-2393/latency-svc-4sq28 updated: 1 ports\nI1006 23:58:24.139019       1 service.go:301] Service svc-latency-2393/latency-svc-fq2rs updated: 1 ports\nI1006 23:58:24.156673       1 service.go:301] Service svc-latency-2393/latency-svc-xs5sp updated: 1 ports\nI1006 23:58:24.168851       1 service.go:301] Service svc-latency-2393/latency-svc-rbqdn updated: 1 ports\nI1006 23:58:24.197957       1 service.go:301] Service svc-latency-2393/latency-svc-qs9qp updated: 1 ports\nI1006 23:58:24.204356       1 service.go:301] Service svc-latency-2393/latency-svc-x96jb updated: 1 ports\nI1006 23:58:24.213473       1 service.go:301] Service svc-latency-2393/latency-svc-lmhqf updated: 1 ports\nI1006 23:58:24.216886       1 service.go:301] Service svc-latency-2393/latency-svc-pjp4l updated: 1 ports\nI1006 23:58:24.224350       1 service.go:301] Service svc-latency-2393/latency-svc-rvbxj updated: 1 ports\nI1006 23:58:24.227882       1 service.go:301] Service svc-latency-2393/latency-svc-dtwqp updated: 1 ports\nI1006 23:58:24.242554       1 service.go:301] Service svc-latency-2393/latency-svc-nzkkj updated: 1 ports\nI1006 23:58:24.242914       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bhkxf\" at 100.69.106.65:80/TCP\nI1006 23:58:24.243099       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-pjp4l\" at 100.66.27.6:80/TCP\nI1006 23:58:24.243240       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-wrpzl\" at 100.64.138.72:80/TCP\nI1006 23:58:24.243381       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-dtwqp\" at 100.71.85.116:80/TCP\nI1006 23:58:24.243518       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-twbnv\" at 100.64.175.60:80/TCP\nI1006 23:58:24.243654       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-tx546\" at 100.71.129.35:80/TCP\nI1006 23:58:24.243794       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4d8qt\" at 100.64.193.85:80/TCP\nI1006 23:58:24.243980       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sm4rn\" at 100.68.19.90:80/TCP\nI1006 23:58:24.244156       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-mjtlp\" at 100.71.8.204:80/TCP\nI1006 23:58:24.244285       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4sq28\" at 100.71.187.56:80/TCP\nI1006 23:58:24.244422       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-nzkkj\" at 100.68.123.63:80/TCP\nI1006 23:58:24.244551       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9z82n\" at 100.68.206.18:80/TCP\nI1006 23:58:24.247152       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9znmp\" at 100.71.146.171:80/TCP\nI1006 23:58:24.247350       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-hfbbl\" at 100.67.81.36:80/TCP\nI1006 23:58:24.247486       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8cjfh\" at 100.71.51.213:80/TCP\nI1006 23:58:24.247623       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sg7mj\" at 100.66.198.168:80/TCP\nI1006 23:58:24.247752       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-trffr\" at 100.69.231.20:80/TCP\nI1006 23:58:24.247877       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-25pc5\" at 100.65.58.136:80/TCP\nI1006 23:58:24.248007       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-t879p\" at 100.68.21.187:80/TCP\nI1006 23:58:24.248144       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8tzdd\" at 100.70.96.11:80/TCP\nI1006 23:58:24.248276       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-cwhx4\" at 100.66.152.174:80/TCP\nI1006 23:58:24.248426       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-g9zgv\" at 100.71.202.47:80/TCP\nI1006 23:58:24.248567       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8rhqc\" at 100.66.61.201:80/TCP\nI1006 23:58:24.248697       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-65x5t\" at 100.70.46.38:80/TCP\nI1006 23:58:24.248830       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-p6hmv\" at 100.65.127.56:80/TCP\nI1006 23:58:24.248978       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-fq2rs\" at 100.68.231.131:80/TCP\nI1006 23:58:24.249108       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9mctz\" at 100.66.58.69:80/TCP\nI1006 23:58:24.249248       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-dsrz7\" at 100.69.123.66:80/TCP\nI1006 23:58:24.249528       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4khm5\" at 100.71.89.22:80/TCP\nI1006 23:58:24.249704       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9wr74\" at 100.67.234.202:80/TCP\nI1006 23:58:24.249900       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xs5sp\" at 100.68.117.227:80/TCP\nI1006 23:58:24.250519       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-r76jp\" at 100.68.15.15:80/TCP\nI1006 23:58:24.252109       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-c5kdc\" at 100.66.170.89:80/TCP\nI1006 23:58:24.252141       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rbqdn\" at 100.66.38.196:80/TCP\nI1006 23:58:24.252156       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-x96jb\" at 100.66.26.12:80/TCP\nI1006 23:58:24.252366       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-lmhqf\" at 100.64.30.102:80/TCP\nI1006 23:58:24.252457       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ljwk7\" at 100.68.127.175:80/TCP\nI1006 23:58:24.252478       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-snxws\" at 100.68.162.55:80/TCP\nI1006 23:58:24.252490       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-svvp8\" at 100.64.26.238:80/TCP\nI1006 23:58:24.252500       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xh9p9\" at 100.71.68.187:80/TCP\nI1006 23:58:24.252511       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6tq7k\" at 100.68.95.32:80/TCP\nI1006 23:58:24.252528       1 service.go:416] 
Adding new service port \"svc-latency-2393/latency-svc-2vmx5\" at 100.69.216.141:80/TCP\nI1006 23:58:24.252538       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kmgds\" at 100.70.146.228:80/TCP\nI1006 23:58:24.252553       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8t9bn\" at 100.68.253.246:80/TCP\nI1006 23:58:24.252680       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-f5vlj\" at 100.69.91.248:80/TCP\nI1006 23:58:24.252708       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ntzqt\" at 100.66.49.214:80/TCP\nI1006 23:58:24.252721       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kqcz4\" at 100.70.82.247:80/TCP\nI1006 23:58:24.252738       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rvbxj\" at 100.66.47.186:80/TCP\nI1006 23:58:24.252862       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-t7d5t\" at 100.66.197.82:80/TCP\nI1006 23:58:24.252889       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xpdff\" at 100.67.130.62:80/TCP\nI1006 23:58:24.252922       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-qs9qp\" at 100.64.177.126:80/TCP\nI1006 23:58:24.252934       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6f9l6\" at 100.67.136.118:80/TCP\nI1006 23:58:24.253100       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ph7hq\" at 100.67.185.76:80/TCP\nI1006 23:58:24.253760       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:24.257141       1 service.go:301] Service svc-latency-2393/latency-svc-ghnmt updated: 1 ports\nI1006 23:58:24.273547       1 service.go:301] Service svc-latency-2393/latency-svc-9vlxv updated: 1 ports\nI1006 23:58:24.284844       1 service.go:301] Service svc-latency-2393/latency-svc-5z6dn updated: 1 ports\nI1006 23:58:24.307319       1 
service.go:301] Service svc-latency-2393/latency-svc-kt9c2 updated: 1 ports\nI1006 23:58:24.314457       1 service.go:301] Service svc-latency-2393/latency-svc-sqjb2 updated: 1 ports\nI1006 23:58:24.341753       1 service.go:301] Service svc-latency-2393/latency-svc-pkdj4 updated: 1 ports\nI1006 23:58:24.366116       1 service.go:301] Service svc-latency-2393/latency-svc-2v4f6 updated: 1 ports\nI1006 23:58:24.379181       1 service.go:301] Service svc-latency-2393/latency-svc-q5pc8 updated: 1 ports\nI1006 23:58:24.402803       1 service.go:301] Service svc-latency-2393/latency-svc-hbrxx updated: 1 ports\nI1006 23:58:24.410687       1 service.go:301] Service svc-latency-2393/latency-svc-4rntd updated: 1 ports\nI1006 23:58:24.428751       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"185.83505ms\"\nI1006 23:58:24.449275       1 service.go:301] Service svc-latency-2393/latency-svc-n4pq2 updated: 1 ports\nI1006 23:58:24.483250       1 service.go:301] Service svc-latency-2393/latency-svc-vrx8c updated: 1 ports\nI1006 23:58:24.523071       1 service.go:301] Service svc-latency-2393/latency-svc-72blr updated: 1 ports\nI1006 23:58:24.572831       1 service.go:301] Service svc-latency-2393/latency-svc-qsxlg updated: 1 ports\nI1006 23:58:24.622332       1 service.go:301] Service svc-latency-2393/latency-svc-2ht6w updated: 1 ports\nI1006 23:58:24.693183       1 service.go:301] Service svc-latency-2393/latency-svc-d7x2s updated: 1 ports\nI1006 23:58:24.737318       1 service.go:301] Service svc-latency-2393/latency-svc-dd9dt updated: 1 ports\nI1006 23:58:24.802032       1 service.go:301] Service svc-latency-2393/latency-svc-pws45 updated: 1 ports\nI1006 23:58:24.880110       1 service.go:301] Service svc-latency-2393/latency-svc-mqgrv updated: 1 ports\nI1006 23:58:24.897426       1 service.go:301] Service svc-latency-2393/latency-svc-gk844 updated: 1 ports\nI1006 23:58:24.924732       1 service.go:301] Service svc-latency-2393/latency-svc-q6rjz updated: 1 
ports\nI1006 23:58:24.988638       1 service.go:301] Service svc-latency-2393/latency-svc-m5g7n updated: 1 ports\nI1006 23:58:25.042123       1 service.go:301] Service svc-latency-2393/latency-svc-2f6b8 updated: 1 ports\nI1006 23:58:25.073003       1 service.go:301] Service svc-latency-2393/latency-svc-qrk2d updated: 1 ports\nI1006 23:58:25.122725       1 service.go:301] Service svc-latency-2393/latency-svc-nr2rl updated: 1 ports\nI1006 23:58:25.172050       1 service.go:301] Service svc-latency-2393/latency-svc-vgb5r updated: 1 ports\nI1006 23:58:25.221835       1 service.go:301] Service svc-latency-2393/latency-svc-7qttt updated: 1 ports\nI1006 23:58:25.242699       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sqjb2\" at 100.67.195.246:80/TCP\nI1006 23:58:25.242786       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-nr2rl\" at 100.64.255.64:80/TCP\nI1006 23:58:25.242822       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kt9c2\" at 100.70.181.171:80/TCP\nI1006 23:58:25.242892       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-n4pq2\" at 100.68.131.119:80/TCP\nI1006 23:58:25.242912       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2ht6w\" at 100.68.47.173:80/TCP\nI1006 23:58:25.242922       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q6rjz\" at 100.70.216.63:80/TCP\nI1006 23:58:25.242933       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-qrk2d\" at 100.71.28.238:80/TCP\nI1006 23:58:25.242962       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-hbrxx\" at 100.70.187.231:80/TCP\nI1006 23:58:25.242974       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4rntd\" at 100.67.64.32:80/TCP\nI1006 23:58:25.243067       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-d7x2s\" at 100.67.31.208:80/TCP\nI1006 
23:58:25.243117       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-pws45\" at 100.69.250.178:80/TCP\nI1006 23:58:25.243135       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-gk844\" at 100.68.167.184:80/TCP\nI1006 23:58:25.243146       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9vlxv\" at 100.69.156.141:80/TCP\nI1006 23:58:25.243158       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-m5g7n\" at 100.68.50.188:80/TCP\nI1006 23:58:25.243169       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-vgb5r\" at 100.68.107.120:80/TCP\nI1006 23:58:25.243306       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-7qttt\" at 100.65.61.100:80/TCP\nI1006 23:58:25.243367       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ghnmt\" at 100.64.17.77:80/TCP\nI1006 23:58:25.243383       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q5pc8\" at 100.66.29.144:80/TCP\nI1006 23:58:25.243394       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-72blr\" at 100.67.87.72:80/TCP\nI1006 23:58:25.243405       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-dd9dt\" at 100.68.63.150:80/TCP\nI1006 23:58:25.243477       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2f6b8\" at 100.71.97.214:80/TCP\nI1006 23:58:25.243509       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-qsxlg\" at 100.70.200.109:80/TCP\nI1006 23:58:25.243551       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-mqgrv\" at 100.65.155.73:80/TCP\nI1006 23:58:25.243563       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-pkdj4\" at 100.69.31.80:80/TCP\nI1006 23:58:25.243616       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2v4f6\" at 
100.67.48.188:80/TCP\nI1006 23:58:25.243648       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-vrx8c\" at 100.70.190.69:80/TCP\nI1006 23:58:25.243661       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-5z6dn\" at 100.66.6.26:80/TCP\nI1006 23:58:25.244142       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:25.277607       1 service.go:301] Service svc-latency-2393/latency-svc-r5wxn updated: 1 ports\nI1006 23:58:25.326427       1 service.go:301] Service svc-latency-2393/latency-svc-kgc4n updated: 1 ports\nI1006 23:58:25.329318       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"86.640915ms\"\nI1006 23:58:25.371244       1 service.go:301] Service svc-latency-2393/latency-svc-s7pjz updated: 1 ports\nI1006 23:58:25.420102       1 service.go:301] Service svc-latency-2393/latency-svc-p5gsf updated: 1 ports\nI1006 23:58:25.478283       1 service.go:301] Service svc-latency-2393/latency-svc-ptrkr updated: 1 ports\nI1006 23:58:25.524035       1 service.go:301] Service svc-latency-2393/latency-svc-ftpls updated: 1 ports\nI1006 23:58:25.577460       1 service.go:301] Service svc-latency-2393/latency-svc-rvg8x updated: 1 ports\nI1006 23:58:25.625967       1 service.go:301] Service svc-latency-2393/latency-svc-j7zmj updated: 1 ports\nI1006 23:58:25.679031       1 service.go:301] Service svc-latency-2393/latency-svc-cl57v updated: 1 ports\nI1006 23:58:25.724448       1 service.go:301] Service svc-latency-2393/latency-svc-lkcg4 updated: 1 ports\nI1006 23:58:25.779921       1 service.go:301] Service svc-latency-2393/latency-svc-lz54r updated: 1 ports\nI1006 23:58:25.825779       1 service.go:301] Service svc-latency-2393/latency-svc-k4jsf updated: 1 ports\nI1006 23:58:25.874994       1 service.go:301] Service svc-latency-2393/latency-svc-x7857 updated: 1 ports\nI1006 23:58:25.928053       1 service.go:301] Service svc-latency-2393/latency-svc-wwvrh updated: 1 ports\nI1006 23:58:25.989116       1 
service.go:301] Service svc-latency-2393/latency-svc-wm7kf updated: 1 ports\nI1006 23:58:26.029783       1 service.go:301] Service svc-latency-2393/latency-svc-xqd76 updated: 1 ports\nI1006 23:58:26.074448       1 service.go:301] Service svc-latency-2393/latency-svc-zr7lv updated: 1 ports\nI1006 23:58:26.136032       1 service.go:301] Service svc-latency-2393/latency-svc-2zbkg updated: 1 ports\nI1006 23:58:26.186978       1 service.go:301] Service svc-latency-2393/latency-svc-2s2mf updated: 1 ports\nI1006 23:58:26.224336       1 service.go:301] Service svc-latency-2393/latency-svc-v8qq2 updated: 1 ports\nI1006 23:58:26.241904       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-j7zmj\" at 100.67.237.129:80/TCP\nI1006 23:58:26.242275       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-lz54r\" at 100.65.192.95:80/TCP\nI1006 23:58:26.242477       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-wwvrh\" at 100.66.193.177:80/TCP\nI1006 23:58:26.242821       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zr7lv\" at 100.66.102.97:80/TCP\nI1006 23:58:26.243086       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2s2mf\" at 100.71.104.99:80/TCP\nI1006 23:58:26.243302       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ptrkr\" at 100.64.203.215:80/TCP\nI1006 23:58:26.243493       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ftpls\" at 100.64.143.25:80/TCP\nI1006 23:58:26.243660       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rvg8x\" at 100.69.125.155:80/TCP\nI1006 23:58:26.243915       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-cl57v\" at 100.69.213.69:80/TCP\nI1006 23:58:26.244248       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-k4jsf\" at 100.71.43.3:80/TCP\nI1006 23:58:26.244382       1 service.go:416] 
Adding new service port \"svc-latency-2393/latency-svc-xqd76\" at 100.70.45.202:80/TCP\nI1006 23:58:26.244400       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-lkcg4\" at 100.68.199.120:80/TCP\nI1006 23:58:26.244412       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-x7857\" at 100.71.166.4:80/TCP\nI1006 23:58:26.244534       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-wm7kf\" at 100.65.82.104:80/TCP\nI1006 23:58:26.244546       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-p5gsf\" at 100.66.165.114:80/TCP\nI1006 23:58:26.244558       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2zbkg\" at 100.70.188.23:80/TCP\nI1006 23:58:26.244582       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-v8qq2\" at 100.64.27.194:80/TCP\nI1006 23:58:26.244684       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-r5wxn\" at 100.64.94.70:80/TCP\nI1006 23:58:26.244716       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kgc4n\" at 100.71.254.77:80/TCP\nI1006 23:58:26.244733       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-s7pjz\" at 100.67.55.36:80/TCP\nI1006 23:58:26.245158       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:26.283680       1 service.go:301] Service svc-latency-2393/latency-svc-jmqfb updated: 1 ports\nI1006 23:58:26.359395       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"117.49448ms\"\nI1006 23:58:26.409496       1 service.go:301] Service svc-latency-2393/latency-svc-z6dpv updated: 1 ports\nI1006 23:58:26.433905       1 service.go:301] Service svc-latency-2393/latency-svc-8p67f updated: 1 ports\nI1006 23:58:26.457720       1 service.go:301] Service svc-latency-2393/latency-svc-bfk2f updated: 1 ports\nI1006 23:58:26.495941       1 service.go:301] Service svc-latency-2393/latency-svc-zgz8f updated: 1 
ports\nI1006 23:58:26.528657       1 service.go:301] Service svc-latency-2393/latency-svc-z29ch updated: 1 ports\nI1006 23:58:26.591100       1 service.go:301] Service svc-latency-2393/latency-svc-fmp5f updated: 1 ports\nI1006 23:58:26.640356       1 service.go:301] Service svc-latency-2393/latency-svc-n7xzt updated: 1 ports\nI1006 23:58:26.682418       1 service.go:301] Service svc-latency-2393/latency-svc-86bkf updated: 1 ports\nI1006 23:58:26.772754       1 service.go:301] Service svc-latency-2393/latency-svc-fztg4 updated: 1 ports\nI1006 23:58:26.824161       1 service.go:301] Service svc-latency-2393/latency-svc-mpn8x updated: 1 ports\nI1006 23:58:26.872057       1 service.go:301] Service svc-latency-2393/latency-svc-l2rmx updated: 1 ports\nI1006 23:58:26.919084       1 service.go:301] Service svc-latency-2393/latency-svc-nb9qf updated: 1 ports\nI1006 23:58:27.001737       1 service.go:301] Service svc-latency-2393/latency-svc-78249 updated: 1 ports\nI1006 23:58:27.033185       1 service.go:301] Service svc-latency-2393/latency-svc-kwdlc updated: 1 ports\nI1006 23:58:27.088797       1 service.go:301] Service svc-latency-2393/latency-svc-8lxw5 updated: 1 ports\nI1006 23:58:27.139437       1 service.go:301] Service svc-latency-2393/latency-svc-xdldk updated: 1 ports\nI1006 23:58:27.186371       1 service.go:301] Service svc-latency-2393/latency-svc-88vj8 updated: 1 ports\nI1006 23:58:27.224119       1 service.go:301] Service svc-latency-2393/latency-svc-95dd8 updated: 1 ports\nI1006 23:58:27.240133       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zgz8f\" at 100.68.82.254:80/TCP\nI1006 23:58:27.240185       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-fztg4\" at 100.65.97.11:80/TCP\nI1006 23:58:27.240272       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kwdlc\" at 100.70.53.155:80/TCP\nI1006 23:58:27.240328       1 service.go:416] Adding new service port 
\"svc-latency-2393/latency-svc-8lxw5\" at 100.66.40.198:80/TCP\nI1006 23:58:27.240346       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8p67f\" at 100.69.45.24:80/TCP\nI1006 23:58:27.240358       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bfk2f\" at 100.71.23.207:80/TCP\nI1006 23:58:27.240415       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-z29ch\" at 100.70.59.18:80/TCP\nI1006 23:58:27.240433       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-86bkf\" at 100.66.101.146:80/TCP\nI1006 23:58:27.240445       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-nb9qf\" at 100.69.148.228:80/TCP\nI1006 23:58:27.240502       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-95dd8\" at 100.69.59.29:80/TCP\nI1006 23:58:27.240516       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-n7xzt\" at 100.69.231.36:80/TCP\nI1006 23:58:27.240528       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-mpn8x\" at 100.69.174.76:80/TCP\nI1006 23:58:27.240590       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-l2rmx\" at 100.64.20.187:80/TCP\nI1006 23:58:27.240619       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-78249\" at 100.68.197.231:80/TCP\nI1006 23:58:27.240631       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xdldk\" at 100.70.121.146:80/TCP\nI1006 23:58:27.240700       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-88vj8\" at 100.66.231.169:80/TCP\nI1006 23:58:27.240714       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-jmqfb\" at 100.68.100.20:80/TCP\nI1006 23:58:27.240725       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-z6dpv\" at 100.68.119.16:80/TCP\nI1006 23:58:27.240787       1 service.go:416] Adding new 
service port \"svc-latency-2393/latency-svc-fmp5f\" at 100.67.242.254:80/TCP\nI1006 23:58:27.241206       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:27.270772       1 service.go:301] Service svc-latency-2393/latency-svc-zgfcn updated: 1 ports\nI1006 23:58:27.342065       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"101.933813ms\"\nI1006 23:58:27.350351       1 service.go:301] Service svc-latency-2393/latency-svc-zjs27 updated: 1 ports\nI1006 23:58:27.390976       1 service.go:301] Service svc-latency-2393/latency-svc-c8968 updated: 1 ports\nI1006 23:58:27.428486       1 service.go:301] Service svc-latency-2393/latency-svc-d4nf7 updated: 1 ports\nI1006 23:58:27.477357       1 service.go:301] Service svc-latency-2393/latency-svc-ggr9f updated: 1 ports\nI1006 23:58:27.524245       1 service.go:301] Service svc-latency-2393/latency-svc-ldq82 updated: 1 ports\nI1006 23:58:27.585629       1 service.go:301] Service svc-latency-2393/latency-svc-rq2f4 updated: 1 ports\nI1006 23:58:27.637696       1 service.go:301] Service svc-latency-2393/latency-svc-4mcw7 updated: 1 ports\nI1006 23:58:27.685209       1 service.go:301] Service svc-latency-2393/latency-svc-rbgp8 updated: 1 ports\nI1006 23:58:27.735530       1 service.go:301] Service svc-latency-2393/latency-svc-gv4lt updated: 1 ports\nI1006 23:58:27.787986       1 service.go:301] Service svc-latency-2393/latency-svc-ncgrr updated: 1 ports\nI1006 23:58:27.825639       1 service.go:301] Service svc-latency-2393/latency-svc-bgk5c updated: 1 ports\nI1006 23:58:27.960904       1 service.go:301] Service svc-latency-2393/latency-svc-2mfr7 updated: 1 ports\nI1006 23:58:28.004398       1 service.go:301] Service svc-latency-2393/latency-svc-45s85 updated: 1 ports\nI1006 23:58:28.043756       1 service.go:301] Service svc-latency-2393/latency-svc-q84bn updated: 1 ports\nI1006 23:58:28.089163       1 service.go:301] Service svc-latency-2393/latency-svc-895d9 updated: 1 ports\nI1006 23:58:28.107863       1 
service.go:301] Service svc-latency-2393/latency-svc-6s79h updated: 1 ports\nI1006 23:58:28.149420       1 service.go:301] Service svc-latency-2393/latency-svc-8bv9v updated: 1 ports\nI1006 23:58:28.226232       1 service.go:301] Service svc-latency-2393/latency-svc-xjw58 updated: 1 ports\nI1006 23:58:28.240240       1 service.go:301] Service svc-latency-2393/latency-svc-f9f5k updated: 1 ports\nI1006 23:58:28.240363       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-895d9\" at 100.71.23.121:80/TCP\nI1006 23:58:28.240389       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8bv9v\" at 100.65.151.88:80/TCP\nI1006 23:58:28.240440       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zjs27\" at 100.66.206.217:80/TCP\nI1006 23:58:28.240452       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ggr9f\" at 100.71.215.40:80/TCP\nI1006 23:58:28.240462       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rq2f4\" at 100.70.254.79:80/TCP\nI1006 23:58:28.240473       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rbgp8\" at 100.64.179.229:80/TCP\nI1006 23:58:28.240518       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ncgrr\" at 100.65.110.85:80/TCP\nI1006 23:58:28.240532       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2mfr7\" at 100.69.22.93:80/TCP\nI1006 23:58:28.240543       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-f9f5k\" at 100.64.83.106:80/TCP\nI1006 23:58:28.240607       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zgfcn\" at 100.68.94.191:80/TCP\nI1006 23:58:28.240622       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-c8968\" at 100.70.149.11:80/TCP\nI1006 23:58:28.240634       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4mcw7\" at 
100.66.140.128:80/TCP\nI1006 23:58:28.240645       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ldq82\" at 100.64.140.226:80/TCP\nI1006 23:58:28.240692       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q84bn\" at 100.71.251.175:80/TCP\nI1006 23:58:28.240726       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xjw58\" at 100.69.219.160:80/TCP\nI1006 23:58:28.240737       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-d4nf7\" at 100.70.119.195:80/TCP\nI1006 23:58:28.240748       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-gv4lt\" at 100.65.148.7:80/TCP\nI1006 23:58:28.240760       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bgk5c\" at 100.65.213.207:80/TCP\nI1006 23:58:28.240771       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-45s85\" at 100.67.27.125:80/TCP\nI1006 23:58:28.240805       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6s79h\" at 100.71.11.71:80/TCP\nI1006 23:58:28.243127       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:28.308447       1 service.go:301] Service svc-latency-2393/latency-svc-r57pf updated: 1 ports\nI1006 23:58:28.407174       1 service.go:301] Service svc-latency-2393/latency-svc-9jfgq updated: 1 ports\nI1006 23:58:28.420498       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"180.123263ms\"\nI1006 23:58:28.426790       1 service.go:301] Service svc-latency-2393/latency-svc-84jwb updated: 1 ports\nI1006 23:58:28.445705       1 service.go:301] Service svc-latency-2393/latency-svc-thr85 updated: 1 ports\nI1006 23:58:28.468435       1 service.go:301] Service svc-latency-2393/latency-svc-644q4 updated: 1 ports\nI1006 23:58:28.532088       1 service.go:301] Service svc-latency-2393/latency-svc-vmbwr updated: 1 ports\nI1006 23:58:28.575736       1 service.go:301] Service 
svc-latency-2393/latency-svc-z4dpb updated: 1 ports\nI1006 23:58:28.622565       1 service.go:301] Service svc-latency-2393/latency-svc-sg2w2 updated: 1 ports\nI1006 23:58:28.670200       1 service.go:301] Service svc-latency-2393/latency-svc-f8mr2 updated: 1 ports\nI1006 23:58:28.722961       1 service.go:301] Service svc-latency-2393/latency-svc-b47ct updated: 1 ports\nI1006 23:58:28.779119       1 service.go:301] Service svc-latency-2393/latency-svc-rmhph updated: 1 ports\nI1006 23:58:28.829258       1 service.go:301] Service svc-latency-2393/latency-svc-bqrrs updated: 1 ports\nI1006 23:58:28.870407       1 service.go:301] Service svc-latency-2393/latency-svc-ccshc updated: 1 ports\nI1006 23:58:28.961156       1 service.go:301] Service services-2188/clusterip-service updated: 1 ports\nI1006 23:58:28.978988       1 service.go:301] Service svc-latency-2393/latency-svc-ldldr updated: 1 ports\nI1006 23:58:28.995763       1 service.go:301] Service services-2188/externalsvc updated: 1 ports\nI1006 23:58:29.049630       1 service.go:301] Service svc-latency-2393/latency-svc-jbm7q updated: 1 ports\nI1006 23:58:29.090872       1 service.go:301] Service svc-latency-2393/latency-svc-b8cp5 updated: 1 ports\nI1006 23:58:29.139464       1 service.go:301] Service svc-latency-2393/latency-svc-6wvqq updated: 1 ports\nI1006 23:58:29.175593       1 service.go:301] Service svc-latency-2393/latency-svc-cwcq2 updated: 1 ports\nI1006 23:58:29.222678       1 service.go:301] Service svc-latency-2393/latency-svc-c9bqk updated: 1 ports\nI1006 23:58:29.239461       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-vmbwr\" at 100.64.176.200:80/TCP\nI1006 23:58:29.239791       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-z4dpb\" at 100.66.180.253:80/TCP\nI1006 23:58:29.239942       1 service.go:416] Adding new service port \"services-2188/externalsvc\" at 100.67.15.109:80/TCP\nI1006 23:58:29.240071       1 service.go:416] Adding new service 
port \"svc-latency-2393/latency-svc-b8cp5\" at 100.66.52.53:80/TCP\nI1006 23:58:29.240189       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-cwcq2\" at 100.66.28.135:80/TCP\nI1006 23:58:29.240334       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-r57pf\" at 100.65.49.96:80/TCP\nI1006 23:58:29.240443       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-84jwb\" at 100.69.245.177:80/TCP\nI1006 23:58:29.240568       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-644q4\" at 100.66.50.239:80/TCP\nI1006 23:58:29.240685       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ccshc\" at 100.64.239.104:80/TCP\nI1006 23:58:29.240705       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-jbm7q\" at 100.71.133.0:80/TCP\nI1006 23:58:29.240717       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6wvqq\" at 100.66.42.185:80/TCP\nI1006 23:58:29.240729       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9jfgq\" at 100.65.63.163:80/TCP\nI1006 23:58:29.240769       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sg2w2\" at 100.66.99.238:80/TCP\nI1006 23:58:29.240783       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-b47ct\" at 100.66.141.67:80/TCP\nI1006 23:58:29.240802       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bqrrs\" at 100.66.222.25:80/TCP\nI1006 23:58:29.240838       1 service.go:416] Adding new service port \"services-2188/clusterip-service\" at 100.65.82.211:80/TCP\nI1006 23:58:29.240854       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ldldr\" at 100.70.145.50:80/TCP\nI1006 23:58:29.240874       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-c9bqk\" at 100.64.93.212:80/TCP\nI1006 23:58:29.240892       1 service.go:416] Adding new 
service port \"svc-latency-2393/latency-svc-thr85\" at 100.70.212.241:80/TCP\nI1006 23:58:29.240928       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-f8mr2\" at 100.70.95.125:80/TCP\nI1006 23:58:29.240946       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rmhph\" at 100.66.202.40:80/TCP\nI1006 23:58:29.241342       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:29.278966       1 service.go:301] Service svc-latency-2393/latency-svc-jw9bd updated: 1 ports\nI1006 23:58:29.323097       1 service.go:301] Service svc-latency-2393/latency-svc-q8npc updated: 1 ports\nI1006 23:58:29.344940       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"105.480284ms\"\nI1006 23:58:29.374153       1 service.go:301] Service svc-latency-2393/latency-svc-vxmjf updated: 1 ports\nI1006 23:58:29.422594       1 service.go:301] Service svc-latency-2393/latency-svc-gnp8t updated: 1 ports\nI1006 23:58:29.469179       1 service.go:301] Service svc-latency-2393/latency-svc-57dmk updated: 1 ports\nI1006 23:58:29.522815       1 service.go:301] Service svc-latency-2393/latency-svc-849zs updated: 1 ports\nI1006 23:58:29.577461       1 service.go:301] Service svc-latency-2393/latency-svc-q7xnp updated: 1 ports\nI1006 23:58:29.626292       1 service.go:301] Service svc-latency-2393/latency-svc-dhlgm updated: 1 ports\nI1006 23:58:29.669229       1 service.go:301] Service svc-latency-2393/latency-svc-6n5j5 updated: 1 ports\nI1006 23:58:29.771567       1 service.go:301] Service svc-latency-2393/latency-svc-xg4ff updated: 1 ports\nI1006 23:58:29.875378       1 service.go:301] Service svc-latency-2393/latency-svc-sxl4j updated: 1 ports\nI1006 23:58:29.920568       1 service.go:301] Service svc-latency-2393/latency-svc-7f76x updated: 1 ports\nI1006 23:58:29.981292       1 service.go:301] Service svc-latency-2393/latency-svc-bjp7n updated: 1 ports\nI1006 23:58:30.029099       1 service.go:301] Service svc-latency-2393/latency-svc-2j9h2 
updated: 1 ports\nI1006 23:58:30.071545       1 service.go:301] Service svc-latency-2393/latency-svc-6fzt9 updated: 1 ports\nI1006 23:58:30.127057       1 service.go:301] Service svc-latency-2393/latency-svc-sx9kt updated: 1 ports\nI1006 23:58:30.174346       1 service.go:301] Service svc-latency-2393/latency-svc-h467x updated: 1 ports\nI1006 23:58:30.223118       1 service.go:301] Service svc-latency-2393/latency-svc-brlrb updated: 1 ports\nI1006 23:58:30.240627       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sx9kt\" at 100.69.115.71:80/TCP\nI1006 23:58:30.240900       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q7xnp\" at 100.69.90.229:80/TCP\nI1006 23:58:30.241055       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-dhlgm\" at 100.66.223.244:80/TCP\nI1006 23:58:30.241172       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sxl4j\" at 100.70.57.250:80/TCP\nI1006 23:58:30.241283       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2j9h2\" at 100.65.255.116:80/TCP\nI1006 23:58:30.241390       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-jw9bd\" at 100.70.191.80:80/TCP\nI1006 23:58:30.241499       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q8npc\" at 100.66.185.46:80/TCP\nI1006 23:58:30.241603       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bjp7n\" at 100.66.233.172:80/TCP\nI1006 23:58:30.241709       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-7f76x\" at 100.68.108.104:80/TCP\nI1006 23:58:30.241827       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6fzt9\" at 100.64.165.179:80/TCP\nI1006 23:58:30.241934       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-h467x\" at 100.68.224.66:80/TCP\nI1006 23:58:30.242045       1 service.go:416] Adding new service port 
\"svc-latency-2393/latency-svc-vxmjf\" at 100.65.109.41:80/TCP\nI1006 23:58:30.242174       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-gnp8t\" at 100.71.27.244:80/TCP\nI1006 23:58:30.242280       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-57dmk\" at 100.71.120.19:80/TCP\nI1006 23:58:30.242382       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6n5j5\" at 100.64.185.248:80/TCP\nI1006 23:58:30.242507       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-849zs\" at 100.68.240.211:80/TCP\nI1006 23:58:30.242616       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xg4ff\" at 100.70.68.200:80/TCP\nI1006 23:58:30.242724       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-brlrb\" at 100.64.106.114:80/TCP\nI1006 23:58:30.243261       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:30.283566       1 service.go:301] Service svc-latency-2393/latency-svc-rtrd7 updated: 1 ports\nI1006 23:58:30.343386       1 service.go:301] Service svc-latency-2393/latency-svc-5xvq5 updated: 1 ports\nI1006 23:58:30.399166       1 service.go:301] Service svc-latency-2393/latency-svc-t5tpd updated: 1 ports\nI1006 23:58:30.417139       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"176.516733ms\"\nI1006 23:58:30.429085       1 service.go:301] Service svc-latency-2393/latency-svc-tzfpz updated: 1 ports\nI1006 23:58:30.476139       1 service.go:301] Service svc-latency-2393/latency-svc-rktpx updated: 1 ports\nI1006 23:58:30.529029       1 service.go:301] Service svc-latency-2393/latency-svc-vsss7 updated: 1 ports\nI1006 23:58:30.579146       1 service.go:301] Service svc-latency-2393/latency-svc-4wc8j updated: 1 ports\nI1006 23:58:30.626426       1 service.go:301] Service svc-latency-2393/latency-svc-tp5sl updated: 1 ports\nI1006 23:58:30.672525       1 service.go:301] Service svc-latency-2393/latency-svc-fvxws updated: 
1 ports\nI1006 23:58:30.722673       1 service.go:301] Service svc-latency-2393/latency-svc-2zvvp updated: 1 ports\nI1006 23:58:30.773670       1 service.go:301] Service svc-latency-2393/latency-svc-ml7jn updated: 1 ports\nI1006 23:58:30.821929       1 service.go:301] Service services-7315/service-headless-toggled updated: 1 ports\nI1006 23:58:30.840601       1 service.go:301] Service svc-latency-2393/latency-svc-4qvp2 updated: 1 ports\nI1006 23:58:30.889633       1 service.go:301] Service svc-latency-2393/latency-svc-zkx4z updated: 1 ports\nI1006 23:58:30.928371       1 service.go:301] Service svc-latency-2393/latency-svc-nkxnz updated: 1 ports\nI1006 23:58:30.983524       1 service.go:301] Service svc-latency-2393/latency-svc-5jh9j updated: 1 ports\nI1006 23:58:31.023218       1 service.go:301] Service svc-latency-2393/latency-svc-hr77g updated: 1 ports\nI1006 23:58:31.069021       1 service.go:301] Service svc-latency-2393/latency-svc-xspnn updated: 1 ports\nI1006 23:58:31.132194       1 service.go:301] Service svc-latency-2393/latency-svc-6vb2d updated: 1 ports\nI1006 23:58:31.187054       1 service.go:301] Service svc-latency-2393/latency-svc-r5l7x updated: 1 ports\nI1006 23:58:31.226371       1 service.go:301] Service svc-latency-2393/latency-svc-n7qcb updated: 1 ports\nI1006 23:58:31.242775       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6vb2d\" at 100.66.97.1:80/TCP\nI1006 23:58:31.243699       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-vsss7\" at 100.70.106.99:80/TCP\nI1006 23:58:31.243883       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2zvvp\" at 100.68.86.224:80/TCP\nI1006 23:58:31.244024       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zkx4z\" at 100.66.237.36:80/TCP\nI1006 23:58:31.244161       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-nkxnz\" at 100.65.118.224:80/TCP\nI1006 23:58:31.244299       1 
service.go:416] Adding new service port \"svc-latency-2393/latency-svc-t5tpd\" at 100.66.214.61:80/TCP\nI1006 23:58:31.244404       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-tp5sl\" at 100.68.190.12:80/TCP\nI1006 23:58:31.244524       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-hr77g\" at 100.65.96.247:80/TCP\nI1006 23:58:31.244636       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-5jh9j\" at 100.66.207.96:80/TCP\nI1006 23:58:31.244940       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xspnn\" at 100.71.153.244:80/TCP\nI1006 23:58:31.245054       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-r5l7x\" at 100.64.95.212:80/TCP\nI1006 23:58:31.245166       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4wc8j\" at 100.67.200.104:80/TCP\nI1006 23:58:31.245278       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-fvxws\" at 100.69.167.182:80/TCP\nI1006 23:58:31.245389       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ml7jn\" at 100.68.72.109:80/TCP\nI1006 23:58:31.245511       1 service.go:416] Adding new service port \"services-7315/service-headless-toggled\" at 100.66.130.182:80/TCP\nI1006 23:58:31.245641       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4qvp2\" at 100.68.162.48:80/TCP\nI1006 23:58:31.245761       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-n7qcb\" at 100.68.177.187:80/TCP\nI1006 23:58:31.245875       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rtrd7\" at 100.67.83.90:80/TCP\nI1006 23:58:31.245987       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-5xvq5\" at 100.66.217.93:80/TCP\nI1006 23:58:31.246220       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-tzfpz\" at 100.65.5.244:80/TCP\nI1006 
23:58:31.246333       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rktpx\" at 100.68.28.108:80/TCP\nI1006 23:58:31.246831       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:31.294031       1 service.go:301] Service svc-latency-2393/latency-svc-t7dgc updated: 1 ports\nI1006 23:58:31.340038       1 service.go:301] Service svc-latency-2393/latency-svc-ck88l updated: 1 ports\nI1006 23:58:31.402917       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"160.142256ms\"\nI1006 23:58:32.406970       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-t7dgc\" at 100.64.148.141:80/TCP\nI1006 23:58:32.407027       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ck88l\" at 100.64.209.223:80/TCP\nI1006 23:58:32.407595       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:32.567067       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"160.141127ms\"\nI1006 23:58:33.942375       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:34.138562       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"193.523989ms\"\nI1006 23:58:34.532659       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:34.696591       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"164.346754ms\"\nI1006 23:58:34.716888       1 service.go:301] Service services-6745/service-proxy-toggled updated: 0 ports\nI1006 23:58:35.697034       1 service.go:441] Removing service port \"services-6745/service-proxy-toggled\"\nI1006 23:58:35.699246       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:35.899526       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"202.498533ms\"\nI1006 23:58:36.900161       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:37.111972       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"212.135456ms\"\nI1006 23:58:37.266645       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:37.538018       1 proxier.go:812] 
\"SyncProxyRules complete\" elapsed=\"271.730496ms\"\nI1006 23:58:38.207173       1 service.go:301] Service services-2188/clusterip-service updated: 0 ports\nI1006 23:58:38.264557       1 service.go:441] Removing service port \"services-2188/clusterip-service\"\nI1006 23:58:38.265982       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:38.469308       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"204.739557ms\"\nI1006 23:58:39.004889       1 service.go:301] Service svc-latency-2393/latency-svc-25pc5 updated: 0 ports\nI1006 23:58:39.019269       1 service.go:301] Service svc-latency-2393/latency-svc-2f6b8 updated: 0 ports\nI1006 23:58:39.035880       1 service.go:301] Service svc-latency-2393/latency-svc-2ht6w updated: 0 ports\nI1006 23:58:39.048090       1 service.go:301] Service svc-latency-2393/latency-svc-2j9h2 updated: 0 ports\nI1006 23:58:39.064955       1 service.go:301] Service svc-latency-2393/latency-svc-2mfr7 updated: 0 ports\nI1006 23:58:39.076807       1 service.go:301] Service svc-latency-2393/latency-svc-2s2mf updated: 0 ports\nI1006 23:58:39.094550       1 service.go:301] Service svc-latency-2393/latency-svc-2v4f6 updated: 0 ports\nI1006 23:58:39.106572       1 service.go:301] Service svc-latency-2393/latency-svc-2vmx5 updated: 0 ports\nI1006 23:58:39.122262       1 service.go:301] Service svc-latency-2393/latency-svc-2zbkg updated: 0 ports\nI1006 23:58:39.139048       1 service.go:301] Service svc-latency-2393/latency-svc-2zvvp updated: 0 ports\nI1006 23:58:39.152165       1 service.go:301] Service svc-latency-2393/latency-svc-45s85 updated: 0 ports\nI1006 23:58:39.167399       1 service.go:301] Service svc-latency-2393/latency-svc-4d8qt updated: 0 ports\nI1006 23:58:39.180885       1 service.go:301] Service svc-latency-2393/latency-svc-4khm5 updated: 0 ports\nI1006 23:58:39.204688       1 service.go:301] Service svc-latency-2393/latency-svc-4mcw7 updated: 0 ports\nI1006 23:58:39.303229       1 service.go:441] Removing service 
port \"svc-latency-2393/latency-svc-2zbkg\"\nI1006 23:58:39.303402       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2j9h2\"\nI1006 23:58:39.303492       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2vmx5\"\nI1006 23:58:39.303571       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2zvvp\"\nI1006 23:58:39.303701       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4mcw7\"\nI1006 23:58:39.304163       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-25pc5\"\nI1006 23:58:39.304278       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-45s85\"\nI1006 23:58:39.304383       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2ht6w\"\nI1006 23:58:39.304464       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2mfr7\"\nI1006 23:58:39.304543       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2s2mf\"\nI1006 23:58:39.304630       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2v4f6\"\nI1006 23:58:39.304709       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4d8qt\"\nI1006 23:58:39.304786       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4khm5\"\nI1006 23:58:39.304962       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-2f6b8\"\nI1006 23:58:39.306433       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:39.422706       1 service.go:301] Service svc-latency-2393/latency-svc-4qvp2 updated: 0 ports\nI1006 23:58:39.509175       1 service.go:301] Service svc-latency-2393/latency-svc-4rntd updated: 0 ports\nI1006 23:58:39.582467       1 service.go:301] Service svc-latency-2393/latency-svc-4sq28 updated: 0 ports\nI1006 23:58:39.615415       1 service.go:301] Service svc-latency-2393/latency-svc-4wc8j updated: 0 ports\nI1006 23:58:39.647900       1 service.go:301] 
Service svc-latency-2393/latency-svc-57dmk updated: 0 ports\nI1006 23:58:39.657392       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"354.152263ms\"\nI1006 23:58:39.658060       1 service.go:301] Service svc-latency-2393/latency-svc-5jh9j updated: 0 ports\nI1006 23:58:39.678701       1 service.go:301] Service svc-latency-2393/latency-svc-5xvq5 updated: 0 ports\nI1006 23:58:39.702085       1 service.go:301] Service svc-latency-2393/latency-svc-5z6dn updated: 0 ports\nI1006 23:58:39.722176       1 service.go:301] Service svc-latency-2393/latency-svc-644q4 updated: 0 ports\nI1006 23:58:39.740423       1 service.go:301] Service svc-latency-2393/latency-svc-65x5t updated: 0 ports\nI1006 23:58:39.771772       1 service.go:301] Service svc-latency-2393/latency-svc-6f9l6 updated: 0 ports\nI1006 23:58:39.795012       1 service.go:301] Service svc-latency-2393/latency-svc-6fzt9 updated: 0 ports\nI1006 23:58:39.807058       1 service.go:301] Service svc-latency-2393/latency-svc-6n5j5 updated: 0 ports\nI1006 23:58:39.825216       1 service.go:301] Service svc-latency-2393/latency-svc-6s79h updated: 0 ports\nI1006 23:58:39.842683       1 service.go:301] Service svc-latency-2393/latency-svc-6tq7k updated: 0 ports\nI1006 23:58:39.879487       1 service.go:301] Service svc-latency-2393/latency-svc-6vb2d updated: 0 ports\nI1006 23:58:39.898904       1 service.go:301] Service svc-latency-2393/latency-svc-6wvqq updated: 0 ports\nI1006 23:58:39.916078       1 service.go:301] Service svc-latency-2393/latency-svc-72blr updated: 0 ports\nI1006 23:58:39.924416       1 service.go:301] Service svc-latency-2393/latency-svc-78249 updated: 0 ports\nI1006 23:58:39.935758       1 service.go:301] Service svc-latency-2393/latency-svc-7f76x updated: 0 ports\nI1006 23:58:39.951260       1 service.go:301] Service svc-latency-2393/latency-svc-7qttt updated: 0 ports\nI1006 23:58:39.967639       1 service.go:301] Service svc-latency-2393/latency-svc-849zs updated: 0 ports\nI1006 
23:58:39.980324       1 service.go:301] Service svc-latency-2393/latency-svc-84jwb updated: 0 ports\nI1006 23:58:39.993826       1 service.go:301] Service svc-latency-2393/latency-svc-86bkf updated: 0 ports\nI1006 23:58:40.004735       1 service.go:301] Service svc-latency-2393/latency-svc-88vj8 updated: 0 ports\nI1006 23:58:40.021706       1 service.go:301] Service svc-latency-2393/latency-svc-895d9 updated: 0 ports\nI1006 23:58:40.048444       1 service.go:301] Service svc-latency-2393/latency-svc-8bv9v updated: 0 ports\nI1006 23:58:40.072956       1 service.go:301] Service svc-latency-2393/latency-svc-8cjfh updated: 0 ports\nI1006 23:58:40.087371       1 service.go:301] Service svc-latency-2393/latency-svc-8lxw5 updated: 0 ports\nI1006 23:58:40.095567       1 service.go:301] Service svc-latency-2393/latency-svc-8p67f updated: 0 ports\nI1006 23:58:40.111667       1 service.go:301] Service svc-latency-2393/latency-svc-8rhqc updated: 0 ports\nI1006 23:58:40.124884       1 service.go:301] Service svc-latency-2393/latency-svc-8t9bn updated: 0 ports\nI1006 23:58:40.137437       1 service.go:301] Service svc-latency-2393/latency-svc-8tzdd updated: 0 ports\nI1006 23:58:40.152306       1 service.go:301] Service svc-latency-2393/latency-svc-95dd8 updated: 0 ports\nI1006 23:58:40.177958       1 service.go:301] Service svc-latency-2393/latency-svc-9jfgq updated: 0 ports\nI1006 23:58:40.198020       1 service.go:301] Service svc-latency-2393/latency-svc-9mctz updated: 0 ports\nI1006 23:58:40.216972       1 service.go:301] Service svc-latency-2393/latency-svc-9vlxv updated: 0 ports\nI1006 23:58:40.231077       1 service.go:301] Service svc-latency-2393/latency-svc-9wr74 updated: 0 ports\nI1006 23:58:40.261790       1 service.go:301] Service svc-latency-2393/latency-svc-9z82n updated: 0 ports\nI1006 23:58:40.262091       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6vb2d\"\nI1006 23:58:40.262194       1 service.go:441] Removing service port 
\"svc-latency-2393/latency-svc-9vlxv\"\nI1006 23:58:40.262351       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-9z82n\"\nI1006 23:58:40.262524       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-5xvq5\"\nI1006 23:58:40.262619       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-65x5t\"\nI1006 23:58:40.262719       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6fzt9\"\nI1006 23:58:40.262797       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-72blr\"\nI1006 23:58:40.262953       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-8cjfh\"\nI1006 23:58:40.262969       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-95dd8\"\nI1006 23:58:40.262978       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4wc8j\"\nI1006 23:58:40.262987       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-9jfgq\"\nI1006 23:58:40.262997       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-9wr74\"\nI1006 23:58:40.263004       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-849zs\"\nI1006 23:58:40.263060       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-8t9bn\"\nI1006 23:58:40.263072       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4qvp2\"\nI1006 23:58:40.263081       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4rntd\"\nI1006 23:58:40.263089       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6f9l6\"\nI1006 23:58:40.263097       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6n5j5\"\nI1006 23:58:40.263106       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6tq7k\"\nI1006 23:58:40.263114       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-7qttt\"\nI1006 
23:58:40.263192       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-8tzdd\"\nI1006 23:58:40.263278       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-4sq28\"\nI1006 23:58:40.263362       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-5jh9j\"\nI1006 23:58:40.263376       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-644q4\"\nI1006 23:58:40.263384       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-78249\"\nI1006 23:58:40.263391       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-8lxw5\"\nI1006 23:58:40.263399       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-8p67f\"\nI1006 23:58:40.263725       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-57dmk\"\nI1006 23:58:40.263774       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-7f76x\"\nI1006 23:58:40.263851       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-84jwb\"\nI1006 23:58:40.263861       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-8rhqc\"\nI1006 23:58:40.263870       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-9mctz\"\nI1006 23:58:40.263975       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-5z6dn\"\nI1006 23:58:40.264047       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6s79h\"\nI1006 23:58:40.264149       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-6wvqq\"\nI1006 23:58:40.264165       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-86bkf\"\nI1006 23:58:40.264174       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-88vj8\"\nI1006 23:58:40.264279       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-895d9\"\nI1006 23:58:40.264336       1 service.go:441] Removing 
service port \"svc-latency-2393/latency-svc-8bv9v\"\nI1006 23:58:40.265289       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:40.299998       1 service.go:301] Service svc-latency-2393/latency-svc-9znmp updated: 0 ports\nI1006 23:58:40.321485       1 service.go:301] Service svc-latency-2393/latency-svc-b47ct updated: 0 ports\nI1006 23:58:40.338709       1 service.go:301] Service svc-latency-2393/latency-svc-b8cp5 updated: 0 ports\nI1006 23:58:40.370039       1 service.go:301] Service svc-latency-2393/latency-svc-bfk2f updated: 0 ports\nI1006 23:58:40.402682       1 service.go:301] Service svc-latency-2393/latency-svc-bgk5c updated: 0 ports\nI1006 23:58:40.439551       1 service.go:301] Service svc-latency-2393/latency-svc-bhkxf updated: 0 ports\nI1006 23:58:40.482292       1 service.go:301] Service svc-latency-2393/latency-svc-bjp7n updated: 0 ports\nI1006 23:58:40.518358       1 service.go:301] Service svc-latency-2393/latency-svc-bqrrs updated: 0 ports\nI1006 23:58:40.563153       1 service.go:301] Service svc-latency-2393/latency-svc-brlrb updated: 0 ports\nI1006 23:58:40.565529       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"303.427141ms\"\nI1006 23:58:40.597406       1 service.go:301] Service svc-latency-2393/latency-svc-c5kdc updated: 0 ports\nI1006 23:58:40.616377       1 service.go:301] Service svc-latency-2393/latency-svc-c8968 updated: 0 ports\nI1006 23:58:40.630985       1 service.go:301] Service svc-latency-2393/latency-svc-c9bqk updated: 0 ports\nI1006 23:58:40.641100       1 service.go:301] Service svc-latency-2393/latency-svc-ccshc updated: 0 ports\nI1006 23:58:40.652667       1 service.go:301] Service svc-latency-2393/latency-svc-ck88l updated: 0 ports\nI1006 23:58:40.669860       1 service.go:301] Service svc-latency-2393/latency-svc-cl57v updated: 0 ports\nI1006 23:58:40.688820       1 service.go:301] Service svc-latency-2393/latency-svc-cwcq2 updated: 0 ports\nI1006 23:58:40.718167       1 service.go:301] Service 
svc-latency-2393/latency-svc-cwhx4 updated: 0 ports\nI1006 23:58:40.743062       1 service.go:301] Service svc-latency-2393/latency-svc-d4nf7 updated: 0 ports\nI1006 23:58:40.752410       1 service.go:301] Service svc-latency-2393/latency-svc-d7x2s updated: 0 ports\nI1006 23:58:40.781020       1 service.go:301] Service svc-latency-2393/latency-svc-dd9dt updated: 0 ports\nI1006 23:58:40.804131       1 service.go:301] Service svc-latency-2393/latency-svc-dhlgm updated: 0 ports\nI1006 23:58:40.818312       1 service.go:301] Service svc-latency-2393/latency-svc-dsrz7 updated: 0 ports\nI1006 23:58:40.827840       1 service.go:301] Service svc-latency-2393/latency-svc-dtwqp updated: 0 ports\nI1006 23:58:40.841038       1 service.go:301] Service svc-latency-2393/latency-svc-f5vlj updated: 0 ports\nI1006 23:58:40.867954       1 service.go:301] Service svc-latency-2393/latency-svc-f8mr2 updated: 0 ports\nI1006 23:58:40.880629       1 service.go:301] Service svc-latency-2393/latency-svc-f9f5k updated: 0 ports\nI1006 23:58:40.900203       1 service.go:301] Service svc-latency-2393/latency-svc-fmp5f updated: 0 ports\nI1006 23:58:40.930938       1 service.go:301] Service svc-latency-2393/latency-svc-fq2rs updated: 0 ports\nI1006 23:58:40.943281       1 service.go:301] Service svc-latency-2393/latency-svc-ftpls updated: 0 ports\nI1006 23:58:40.959619       1 service.go:301] Service svc-latency-2393/latency-svc-fvxws updated: 0 ports\nI1006 23:58:41.006701       1 service.go:301] Service svc-latency-2393/latency-svc-fztg4 updated: 0 ports\nI1006 23:58:41.034960       1 service.go:301] Service svc-latency-2393/latency-svc-g9zgv updated: 0 ports\nI1006 23:58:41.050441       1 service.go:301] Service svc-latency-2393/latency-svc-ggr9f updated: 0 ports\nI1006 23:58:41.067949       1 service.go:301] Service svc-latency-2393/latency-svc-ghnmt updated: 0 ports\nI1006 23:58:41.077973       1 service.go:301] Service svc-latency-2393/latency-svc-gk844 updated: 0 ports\nI1006 
23:58:41.099765       1 service.go:301] Service svc-latency-2393/latency-svc-gnp8t updated: 0 ports\nI1006 23:58:41.116405       1 service.go:301] Service svc-latency-2393/latency-svc-gv4lt updated: 0 ports\nI1006 23:58:41.136985       1 service.go:301] Service svc-latency-2393/latency-svc-h467x updated: 0 ports\nI1006 23:58:41.147853       1 service.go:301] Service svc-latency-2393/latency-svc-hbrxx updated: 0 ports\nI1006 23:58:41.165481       1 service.go:301] Service svc-latency-2393/latency-svc-hfbbl updated: 0 ports\nI1006 23:58:41.178614       1 service.go:301] Service svc-latency-2393/latency-svc-hr77g updated: 0 ports\nI1006 23:58:41.211682       1 service.go:301] Service svc-latency-2393/latency-svc-j7zmj updated: 0 ports\nI1006 23:58:41.240836       1 service.go:301] Service svc-latency-2393/latency-svc-jbm7q updated: 0 ports\nI1006 23:58:41.240979       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-g9zgv\"\nI1006 23:58:41.241052       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-gk844\"\nI1006 23:58:41.241113       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ck88l\"\nI1006 23:58:41.241131       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-cl57v\"\nI1006 23:58:41.241180       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-d7x2s\"\nI1006 23:58:41.241208       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-dd9dt\"\nI1006 23:58:41.241216       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-dtwqp\"\nI1006 23:58:41.241265       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-fztg4\"\nI1006 23:58:41.241296       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-gv4lt\"\nI1006 23:58:41.241856       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-b8cp5\"\nI1006 23:58:41.241893       1 service.go:441] Removing service port 
\"svc-latency-2393/latency-svc-cwhx4\"\nI1006 23:58:41.241905       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-dsrz7\"\nI1006 23:58:41.241939       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-f9f5k\"\nI1006 23:58:41.241956       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ghnmt\"\nI1006 23:58:41.241971       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-hfbbl\"\nI1006 23:58:41.242021       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-j7zmj\"\nI1006 23:58:41.242039       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-bgk5c\"\nI1006 23:58:41.242061       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-bhkxf\"\nI1006 23:58:41.242108       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-d4nf7\"\nI1006 23:58:41.242126       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-dhlgm\"\nI1006 23:58:41.242156       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-fmp5f\"\nI1006 23:58:41.242206       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-hbrxx\"\nI1006 23:58:41.242235       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-9znmp\"\nI1006 23:58:41.242281       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-bfk2f\"\nI1006 23:58:41.242302       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-brlrb\"\nI1006 23:58:41.242330       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-c5kdc\"\nI1006 23:58:41.242377       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-hr77g\"\nI1006 23:58:41.242410       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-bqrrs\"\nI1006 23:58:41.242459       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ftpls\"\nI1006 
23:58:41.242477       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-jbm7q\"\nI1006 23:58:41.242499       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-c9bqk\"\nI1006 23:58:41.242545       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-fq2rs\"\nI1006 23:58:41.242564       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ggr9f\"\nI1006 23:58:41.242593       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-gnp8t\"\nI1006 23:58:41.242638       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-f5vlj\"\nI1006 23:58:41.242659       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-fvxws\"\nI1006 23:58:41.242687       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-h467x\"\nI1006 23:58:41.242734       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-b47ct\"\nI1006 23:58:41.242761       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-bjp7n\"\nI1006 23:58:41.242805       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-c8968\"\nI1006 23:58:41.242826       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ccshc\"\nI1006 23:58:41.242938       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-cwcq2\"\nI1006 23:58:41.242957       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-f8mr2\"\nI1006 23:58:41.243566       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:41.268677       1 service.go:301] Service svc-latency-2393/latency-svc-jmqfb updated: 0 ports\nI1006 23:58:41.292057       1 service.go:301] Service svc-latency-2393/latency-svc-jn2lf updated: 0 ports\nI1006 23:58:41.319306       1 service.go:301] Service svc-latency-2393/latency-svc-jw9bd updated: 0 ports\nI1006 23:58:41.347122       1 service.go:301] Service svc-latency-2393/latency-svc-k4jsf updated: 
0 ports\nI1006 23:58:41.394635       1 service.go:301] Service svc-latency-2393/latency-svc-kgc4n updated: 0 ports\nI1006 23:58:41.476725       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"235.799432ms\"\nI1006 23:58:41.495353       1 service.go:301] Service svc-latency-2393/latency-svc-kmgds updated: 0 ports\nI1006 23:58:41.550488       1 service.go:301] Service svc-latency-2393/latency-svc-kqcz4 updated: 0 ports\nI1006 23:58:41.605089       1 service.go:301] Service svc-latency-2393/latency-svc-kt9c2 updated: 0 ports\nI1006 23:58:41.628945       1 service.go:301] Service svc-latency-2393/latency-svc-kwdlc updated: 0 ports\nI1006 23:58:41.640209       1 service.go:301] Service svc-latency-2393/latency-svc-l2rmx updated: 0 ports\nI1006 23:58:41.653980       1 service.go:301] Service svc-latency-2393/latency-svc-ldldr updated: 0 ports\nI1006 23:58:41.681206       1 service.go:301] Service svc-latency-2393/latency-svc-ldq82 updated: 0 ports\nI1006 23:58:41.705366       1 service.go:301] Service svc-latency-2393/latency-svc-ljwk7 updated: 0 ports\nI1006 23:58:41.738101       1 service.go:301] Service svc-latency-2393/latency-svc-lkcg4 updated: 0 ports\nI1006 23:58:41.763947       1 service.go:301] Service svc-latency-2393/latency-svc-lmhqf updated: 0 ports\nI1006 23:58:41.785720       1 service.go:301] Service svc-latency-2393/latency-svc-lz54r updated: 0 ports\nI1006 23:58:41.803104       1 service.go:301] Service svc-latency-2393/latency-svc-m5g7n updated: 0 ports\nI1006 23:58:41.820274       1 service.go:301] Service svc-latency-2393/latency-svc-mjtlp updated: 0 ports\nI1006 23:58:41.830903       1 service.go:301] Service svc-latency-2393/latency-svc-ml7jn updated: 0 ports\nI1006 23:58:41.854184       1 service.go:301] Service svc-latency-2393/latency-svc-mpn8x updated: 0 ports\nI1006 23:58:41.876255       1 service.go:301] Service svc-latency-2393/latency-svc-mqgrv updated: 0 ports\nI1006 23:58:41.899697       1 service.go:301] Service 
svc-latency-2393/latency-svc-n4pq2 updated: 0 ports\nI1006 23:58:41.913310       1 service.go:301] Service svc-latency-2393/latency-svc-n7qcb updated: 0 ports\nI1006 23:58:41.931557       1 service.go:301] Service svc-latency-2393/latency-svc-n7xzt updated: 0 ports\nI1006 23:58:41.971308       1 service.go:301] Service svc-latency-2393/latency-svc-nb9qf updated: 0 ports\nI1006 23:58:41.993897       1 service.go:301] Service svc-latency-2393/latency-svc-ncgrr updated: 0 ports\nI1006 23:58:42.007367       1 service.go:301] Service svc-latency-2393/latency-svc-nkxnz updated: 0 ports\nI1006 23:58:42.024609       1 service.go:301] Service svc-latency-2393/latency-svc-nr2rl updated: 0 ports\nI1006 23:58:42.062589       1 service.go:301] Service svc-latency-2393/latency-svc-ntzqt updated: 0 ports\nI1006 23:58:42.138141       1 service.go:301] Service svc-latency-2393/latency-svc-nzkkj updated: 0 ports\nI1006 23:58:42.175524       1 service.go:301] Service svc-latency-2393/latency-svc-p5gsf updated: 0 ports\nI1006 23:58:42.232470       1 service.go:301] Service svc-latency-2393/latency-svc-p6hmv updated: 0 ports\nI1006 23:58:42.308462       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-lz54r\"\nI1006 23:58:42.308507       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-m5g7n\"\nI1006 23:58:42.308516       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-mjtlp\"\nI1006 23:58:42.308563       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-p5gsf\"\nI1006 23:58:42.308583       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-jmqfb\"\nI1006 23:58:42.308608       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-n7qcb\"\nI1006 23:58:42.308641       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-nb9qf\"\nI1006 23:58:42.308649       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ncgrr\"\nI1006 
23:58:42.308657       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-nkxnz\"\nI1006 23:58:42.308666       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-jn2lf\"\nI1006 23:58:42.308699       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-k4jsf\"\nI1006 23:58:42.308723       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-n7xzt\"\nI1006 23:58:42.308735       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-kgc4n\"\nI1006 23:58:42.308743       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ldldr\"\nI1006 23:58:42.308752       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ldq82\"\nI1006 23:58:42.308832       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-p6hmv\"\nI1006 23:58:42.308848       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-kmgds\"\nI1006 23:58:42.309000       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ml7jn\"\nI1006 23:58:42.309020       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-mqgrv\"\nI1006 23:58:42.309031       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-kwdlc\"\nI1006 23:58:42.309044       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ljwk7\"\nI1006 23:58:42.309178       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-lmhqf\"\nI1006 23:58:42.309198       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-nr2rl\"\nI1006 23:58:42.309318       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-nzkkj\"\nI1006 23:58:42.309339       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-jw9bd\"\nI1006 23:58:42.309366       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-kqcz4\"\nI1006 23:58:42.309391       1 service.go:441] Removing 
service port \"svc-latency-2393/latency-svc-kt9c2\"\nI1006 23:58:42.309401       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-l2rmx\"\nI1006 23:58:42.309410       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-lkcg4\"\nI1006 23:58:42.309418       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-mpn8x\"\nI1006 23:58:42.309548       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-n4pq2\"\nI1006 23:58:42.309571       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ntzqt\"\nI1006 23:58:42.311112       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:42.330308       1 service.go:301] Service svc-latency-2393/latency-svc-ph7hq updated: 0 ports\nI1006 23:58:42.437148       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"128.67175ms\"\nI1006 23:58:42.490623       1 service.go:301] Service svc-latency-2393/latency-svc-pjp4l updated: 0 ports\nI1006 23:58:42.550979       1 service.go:301] Service svc-latency-2393/latency-svc-pkdj4 updated: 0 ports\nI1006 23:58:42.644712       1 service.go:301] Service svc-latency-2393/latency-svc-ptrkr updated: 0 ports\nI1006 23:58:42.669701       1 service.go:301] Service svc-latency-2393/latency-svc-pws45 updated: 0 ports\nI1006 23:58:42.689725       1 service.go:301] Service svc-latency-2393/latency-svc-q5pc8 updated: 0 ports\nI1006 23:58:42.708058       1 service.go:301] Service svc-latency-2393/latency-svc-q6rjz updated: 0 ports\nI1006 23:58:42.746157       1 service.go:301] Service svc-latency-2393/latency-svc-q7xnp updated: 0 ports\nI1006 23:58:42.779451       1 service.go:301] Service svc-latency-2393/latency-svc-q84bn updated: 0 ports\nI1006 23:58:42.801108       1 service.go:301] Service svc-latency-2393/latency-svc-q8npc updated: 0 ports\nI1006 23:58:42.821714       1 service.go:301] Service svc-latency-2393/latency-svc-qrk2d updated: 0 ports\nI1006 23:58:42.832475       1 service.go:301] Service 
svc-latency-2393/latency-svc-qs9qp updated: 0 ports\nI1006 23:58:42.850343       1 service.go:301] Service svc-latency-2393/latency-svc-qsxlg updated: 0 ports\nI1006 23:58:42.873893       1 service.go:301] Service svc-latency-2393/latency-svc-r57pf updated: 0 ports\nI1006 23:58:42.898279       1 service.go:301] Service svc-latency-2393/latency-svc-r5l7x updated: 0 ports\nI1006 23:58:42.928024       1 service.go:301] Service svc-latency-2393/latency-svc-r5wxn updated: 0 ports\nI1006 23:58:42.960624       1 service.go:301] Service svc-latency-2393/latency-svc-r76jp updated: 0 ports\nI1006 23:58:43.013980       1 service.go:301] Service svc-latency-2393/latency-svc-rbgp8 updated: 0 ports\nI1006 23:58:43.045945       1 service.go:301] Service svc-latency-2393/latency-svc-rbqdn updated: 0 ports\nI1006 23:58:43.079015       1 service.go:301] Service svc-latency-2393/latency-svc-rktpx updated: 0 ports\nI1006 23:58:43.093291       1 service.go:301] Service svc-latency-2393/latency-svc-rmhph updated: 0 ports\nI1006 23:58:43.106589       1 service.go:301] Service svc-latency-2393/latency-svc-rq2f4 updated: 0 ports\nI1006 23:58:43.153645       1 service.go:301] Service svc-latency-2393/latency-svc-rtrd7 updated: 0 ports\nI1006 23:58:43.196506       1 service.go:301] Service svc-latency-2393/latency-svc-rvbxj updated: 0 ports\nI1006 23:58:43.213699       1 service.go:301] Service svc-latency-2393/latency-svc-rvg8x updated: 0 ports\nI1006 23:58:43.238944       1 service.go:301] Service svc-latency-2393/latency-svc-s5x4w updated: 0 ports\nI1006 23:58:43.239309       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q5pc8\"\nI1006 23:58:43.239500       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-qs9qp\"\nI1006 23:58:43.239670       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r5l7x\"\nI1006 23:58:43.239839       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rbqdn\"\nI1006 
23:58:43.240020       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r76jp\"\nI1006 23:58:43.240174       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rvbxj\"\nI1006 23:58:43.240351       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ph7hq\"\nI1006 23:58:43.240498       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-pws45\"\nI1006 23:58:43.240658       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q6rjz\"\nI1006 23:58:43.240806       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-qrk2d\"\nI1006 23:58:43.240980       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-qsxlg\"\nI1006 23:58:43.241145       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r57pf\"\nI1006 23:58:43.241164       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rtrd7\"\nI1006 23:58:43.241224       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-s5x4w\"\nI1006 23:58:43.241239       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-pjp4l\"\nI1006 23:58:43.241248       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-pkdj4\"\nI1006 23:58:43.241256       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q7xnp\"\nI1006 23:58:43.241321       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q8npc\"\nI1006 23:58:43.241332       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rbgp8\"\nI1006 23:58:43.241341       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rq2f4\"\nI1006 23:58:43.241389       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ptrkr\"\nI1006 23:58:43.241406       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q84bn\"\nI1006 23:58:43.241431       1 service.go:441] Removing 
service port \"svc-latency-2393/latency-svc-r5wxn\"\nI1006 23:58:43.241440       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rktpx\"\nI1006 23:58:43.241495       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rmhph\"\nI1006 23:58:43.241511       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rvg8x\"\nI1006 23:58:43.242099       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:43.277686       1 service.go:301] Service svc-latency-2393/latency-svc-s7pjz updated: 0 ports\nI1006 23:58:43.305705       1 service.go:301] Service svc-latency-2393/latency-svc-sg2w2 updated: 0 ports\nI1006 23:58:43.338356       1 service.go:301] Service svc-latency-2393/latency-svc-sg7mj updated: 0 ports\nI1006 23:58:43.401660       1 service.go:301] Service svc-latency-2393/latency-svc-sm4rn updated: 0 ports\nI1006 23:58:43.411078       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"171.756718ms\"\nI1006 23:58:43.430845       1 service.go:301] Service svc-latency-2393/latency-svc-snxws updated: 0 ports\nI1006 23:58:43.447293       1 service.go:301] Service svc-latency-2393/latency-svc-sqjb2 updated: 0 ports\nI1006 23:58:43.462634       1 service.go:301] Service svc-latency-2393/latency-svc-svvp8 updated: 0 ports\nI1006 23:58:43.478547       1 service.go:301] Service svc-latency-2393/latency-svc-sx9kt updated: 0 ports\nI1006 23:58:43.506905       1 service.go:301] Service svc-latency-2393/latency-svc-sxl4j updated: 0 ports\nI1006 23:58:43.534722       1 service.go:301] Service svc-latency-2393/latency-svc-t5tpd updated: 0 ports\nI1006 23:58:43.555134       1 service.go:301] Service svc-latency-2393/latency-svc-t7d5t updated: 0 ports\nI1006 23:58:43.579028       1 service.go:301] Service svc-latency-2393/latency-svc-t7dgc updated: 0 ports\nI1006 23:58:43.597000       1 service.go:301] Service svc-latency-2393/latency-svc-t879p updated: 0 ports\nI1006 23:58:43.613758       1 service.go:301] Service 
svc-latency-2393/latency-svc-thr85 updated: 0 ports\nI1006 23:58:43.637573       1 service.go:301] Service svc-latency-2393/latency-svc-tp5sl updated: 0 ports\nI1006 23:58:43.650582       1 service.go:301] Service svc-latency-2393/latency-svc-trffr updated: 0 ports\nI1006 23:58:43.674077       1 service.go:301] Service svc-latency-2393/latency-svc-twbnv updated: 0 ports\nI1006 23:58:43.688772       1 service.go:301] Service svc-latency-2393/latency-svc-tx546 updated: 0 ports\nI1006 23:58:43.706540       1 service.go:301] Service svc-latency-2393/latency-svc-tzfpz updated: 0 ports\nI1006 23:58:43.734306       1 service.go:301] Service svc-latency-2393/latency-svc-v8qq2 updated: 0 ports\nI1006 23:58:43.754226       1 service.go:301] Service svc-latency-2393/latency-svc-vgb5r updated: 0 ports\nI1006 23:58:43.782997       1 service.go:301] Service svc-latency-2393/latency-svc-vmbwr updated: 0 ports\nI1006 23:58:43.799027       1 service.go:301] Service svc-latency-2393/latency-svc-vrx8c updated: 0 ports\nI1006 23:58:43.812569       1 service.go:301] Service svc-latency-2393/latency-svc-vsss7 updated: 0 ports\nI1006 23:58:43.837937       1 service.go:301] Service svc-latency-2393/latency-svc-vxmjf updated: 0 ports\nI1006 23:58:43.849958       1 service.go:301] Service svc-latency-2393/latency-svc-wm7kf updated: 0 ports\nI1006 23:58:43.887106       1 service.go:301] Service svc-latency-2393/latency-svc-wrpzl updated: 0 ports\nI1006 23:58:43.894194       1 service.go:301] Service svc-latency-2393/latency-svc-ww5ng updated: 0 ports\nI1006 23:58:43.901172       1 service.go:301] Service svc-latency-2393/latency-svc-wwvrh updated: 0 ports\nI1006 23:58:43.916893       1 service.go:301] Service svc-latency-2393/latency-svc-x7857 updated: 0 ports\nI1006 23:58:43.929317       1 service.go:301] Service svc-latency-2393/latency-svc-x96jb updated: 0 ports\nI1006 23:58:43.949105       1 service.go:301] Service svc-latency-2393/latency-svc-xdldk updated: 0 ports\nI1006 
23:58:43.959673       1 service.go:301] Service svc-latency-2393/latency-svc-xg4ff updated: 0 ports\nI1006 23:58:43.971602       1 service.go:301] Service svc-latency-2393/latency-svc-xh9p9 updated: 0 ports\nI1006 23:58:43.990269       1 service.go:301] Service svc-latency-2393/latency-svc-xjw58 updated: 0 ports\nI1006 23:58:43.997300       1 service.go:301] Service svc-latency-2393/latency-svc-xpdff updated: 0 ports\nI1006 23:58:44.008473       1 service.go:301] Service svc-latency-2393/latency-svc-xqd76 updated: 0 ports\nI1006 23:58:44.023433       1 service.go:301] Service svc-latency-2393/latency-svc-xs5sp updated: 0 ports\nI1006 23:58:44.032591       1 service.go:301] Service svc-latency-2393/latency-svc-xspnn updated: 0 ports\nI1006 23:58:44.047102       1 service.go:301] Service svc-latency-2393/latency-svc-z29ch updated: 0 ports\nI1006 23:58:44.062883       1 service.go:301] Service svc-latency-2393/latency-svc-z4dpb updated: 0 ports\nI1006 23:58:44.068595       1 service.go:301] Service svc-latency-2393/latency-svc-z6dpv updated: 0 ports\nI1006 23:58:44.077222       1 service.go:301] Service svc-latency-2393/latency-svc-zgfcn updated: 0 ports\nI1006 23:58:44.087785       1 service.go:301] Service svc-latency-2393/latency-svc-zgz8f updated: 0 ports\nI1006 23:58:44.107916       1 service.go:301] Service svc-latency-2393/latency-svc-zjs27 updated: 0 ports\nI1006 23:58:44.122378       1 service.go:301] Service svc-latency-2393/latency-svc-zkx4z updated: 0 ports\nI1006 23:58:44.130983       1 service.go:301] Service svc-latency-2393/latency-svc-zr7lv updated: 0 ports\nI1006 23:58:44.236622       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-tzfpz\"\nI1006 23:58:44.236903       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-wm7kf\"\nI1006 23:58:44.237142       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-z6dpv\"\nI1006 23:58:44.237297       1 service.go:441] Removing service port 
\"svc-latency-2393/latency-svc-twbnv\"\nI1006 23:58:44.237434       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t7d5t\"\nI1006 23:58:44.237598       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t7dgc\"\nI1006 23:58:44.237781       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vgb5r\"\nI1006 23:58:44.237990       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vrx8c\"\nI1006 23:58:44.238227       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-wrpzl\"\nI1006 23:58:44.238889       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xqd76\"\nI1006 23:58:44.239070       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-z29ch\"\nI1006 23:58:44.239212       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-s7pjz\"\nI1006 23:58:44.239566       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t5tpd\"\nI1006 23:58:44.239704       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-trffr\"\nI1006 23:58:44.239839       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vsss7\"\nI1006 23:58:44.239976       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vxmjf\"\nI1006 23:58:44.240093       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-x96jb\"\nI1006 23:58:44.240203       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xjw58\"\nI1006 23:58:44.241003       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xpdff\"\nI1006 23:58:44.241146       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sxl4j\"\nI1006 23:58:44.241360       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zgz8f\"\nI1006 23:58:44.241583       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xspnn\"\nI1006 
23:58:44.242073       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sm4rn\"\nI1006 23:58:44.242236       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sqjb2\"\nI1006 23:58:44.242413       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-thr85\"\nI1006 23:58:44.242539       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ww5ng\"\nI1006 23:58:44.242693       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-x7857\"\nI1006 23:58:44.242819       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xdldk\"\nI1006 23:58:44.243039       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zkx4z\"\nI1006 23:58:44.243163       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sg2w2\"\nI1006 23:58:44.243327       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zr7lv\"\nI1006 23:58:44.243439       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zgfcn\"\nI1006 23:58:44.243771       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-v8qq2\"\nI1006 23:58:44.243900       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vmbwr\"\nI1006 23:58:44.244023       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-wwvrh\"\nI1006 23:58:44.244129       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xh9p9\"\nI1006 23:58:44.244248       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xs5sp\"\nI1006 23:58:44.244351       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-tp5sl\"\nI1006 23:58:44.244447       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-snxws\"\nI1006 23:58:44.244543       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-svvp8\"\nI1006 23:58:44.244643       1 service.go:441] Removing 
service port \"svc-latency-2393/latency-svc-sx9kt\"\nI1006 23:58:44.244740       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t879p\"\nI1006 23:58:44.244843       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sg7mj\"\nI1006 23:58:44.244948       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xg4ff\"\nI1006 23:58:44.245055       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-z4dpb\"\nI1006 23:58:44.245150       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zjs27\"\nI1006 23:58:44.245249       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-tx546\"\nI1006 23:58:44.245825       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:44.365280       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"128.642829ms\"\nI1006 23:58:45.368807       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:45.474227       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"107.22649ms\"\nI1006 23:58:51.590169       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:51.673958       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"83.887836ms\"\nI1006 23:58:51.674165       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:51.728559       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"54.548708ms\"\nI1006 23:58:53.422937       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:53.497102       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"74.435819ms\"\nI1006 23:58:54.497853       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:54.599611       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"101.908302ms\"\nI1006 23:58:55.764584       1 service.go:301] Service dns-8021/test-service-2 updated: 1 ports\nI1006 23:58:55.764648       1 service.go:416] Adding new service port \"dns-8021/test-service-2:http\" at 100.64.41.120:80/TCP\nI1006 23:58:55.765456       1 
proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:55.840265       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"75.593453ms\"\nI1006 23:58:55.840542       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:55.895454       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.132839ms\"\nI1006 23:59:00.712690       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:00.800547       1 service.go:301] Service services-7315/service-headless-toggled updated: 0 ports\nI1006 23:59:00.805684       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"93.104569ms\"\nI1006 23:59:00.805732       1 service.go:441] Removing service port \"services-7315/service-headless-toggled\"\nI1006 23:59:00.807342       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:00.936190       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"130.439449ms\"\nI1006 23:59:01.659949       1 service.go:301] Service services-327/nodeport-collision-1 updated: 1 ports\nI1006 23:59:01.763228       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:01.819133       1 service.go:301] Service services-327/nodeport-collision-2 updated: 1 ports\nI1006 23:59:01.871344       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"108.257779ms\"\nI1006 23:59:02.577106       1 service.go:301] Service services-2188/externalsvc updated: 0 ports\nI1006 23:59:02.874947       1 service.go:441] Removing service port \"services-2188/externalsvc\"\nI1006 23:59:02.875356       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:02.995608       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"120.661193ms\"\nI1006 23:59:06.371634       1 service.go:301] Service kubectl-3758/agnhost-primary updated: 1 ports\nI1006 23:59:06.372005       1 service.go:416] Adding new service port \"kubectl-3758/agnhost-primary\" at 100.66.191.211:6379/TCP\nI1006 23:59:06.372295       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:06.451804       1 proxier.go:812] 
\"SyncProxyRules complete\" elapsed=\"79.779988ms\"\nI1006 23:59:06.452000       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:06.574996       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"123.124548ms\"\nI1006 23:59:08.430820       1 service.go:301] Service apply-2176/test-svc updated: 1 ports\nI1006 23:59:08.430909       1 service.go:416] Adding new service port \"apply-2176/test-svc\" at 100.71.29.57:8080/UDP\nI1006 23:59:08.431020       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:08.593551       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"162.628834ms\"\nI1006 23:59:09.090954       1 service.go:301] Service services-6046/nodeport-range-test updated: 1 ports\nI1006 23:59:09.091501       1 service.go:416] Adding new service port \"services-6046/nodeport-range-test\" at 100.66.215.184:80/TCP\nI1006 23:59:09.091803       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:09.242809       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-6046/nodeport-range-test\\\" (:32015/tcp4)\"\nI1006 23:59:09.251370       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"159.87831ms\"\nI1006 23:59:09.253964       1 service.go:301] Service services-6046/nodeport-range-test updated: 0 ports\nI1006 23:59:10.172222       1 service.go:441] Removing service port \"services-6046/nodeport-range-test\"\nI1006 23:59:10.172637       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:10.244714       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"72.478845ms\"\nI1006 23:59:12.104795       1 service.go:301] Service kubectl-3758/agnhost-primary updated: 0 ports\nI1006 23:59:12.105023       1 service.go:441] Removing service port \"kubectl-3758/agnhost-primary\"\nI1006 23:59:12.105280       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:12.251086       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"146.107861ms\"\nI1006 23:59:12.251493       1 proxier.go:845] \"Syncing 
iptables rules\"\nI1006 23:59:12.345684       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"94.296344ms\"\nI1006 23:59:13.709704       1 service.go:301] Service apply-2176/test-svc updated: 0 ports\nI1006 23:59:13.710047       1 service.go:441] Removing service port \"apply-2176/test-svc\"\nI1006 23:59:13.710347       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:13.837977       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"127.920419ms\"\nI1006 23:59:26.630393       1 service.go:301] Service services-542/test-service-hsr9n updated: 1 ports\nI1006 23:59:26.630457       1 service.go:416] Adding new service port \"services-542/test-service-hsr9n:http\" at 100.70.119.72:80/TCP\nI1006 23:59:26.630570       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:26.874678       1 service.go:301] Service services-542/test-service-hsr9n updated: 1 ports\nI1006 23:59:26.877118       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"246.650237ms\"\nI1006 23:59:26.877424       1 service.go:418] Updating existing service port \"services-542/test-service-hsr9n:http\" at 100.70.119.72:80/TCP\nI1006 23:59:26.877681       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:27.043294       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"165.886949ms\"\nI1006 23:59:27.146376       1 service.go:301] Service services-542/test-service-hsr9n updated: 0 ports\nI1006 23:59:28.043918       1 service.go:441] Removing service port \"services-542/test-service-hsr9n:http\"\nI1006 23:59:28.045050       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:28.169068       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"125.146461ms\"\nI1006 23:59:28.523774       1 service.go:301] Service webhook-3056/e2e-test-webhook updated: 1 ports\nI1006 23:59:29.171343       1 service.go:416] Adding new service port \"webhook-3056/e2e-test-webhook\" at 100.71.229.72:8443/TCP\nI1006 23:59:29.171543       1 proxier.go:845] \"Syncing iptables 
rules\"\nI1006 23:59:29.343868       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"172.553841ms\"\nI1006 23:59:30.101196       1 service.go:301] Service webhook-3056/e2e-test-webhook updated: 0 ports\nI1006 23:59:30.101743       1 service.go:441] Removing service port \"webhook-3056/e2e-test-webhook\"\nI1006 23:59:30.101915       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:30.226055       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"124.784848ms\"\nI1006 23:59:31.226434       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:31.286562       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"60.184363ms\"\nI1006 23:59:33.153008       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:33.179970       1 service.go:301] Service dns-8021/test-service-2 updated: 0 ports\nI1006 23:59:33.298531       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"145.623433ms\"\nI1006 23:59:33.298737       1 service.go:441] Removing service port \"dns-8021/test-service-2:http\"\nI1006 23:59:33.298938       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:33.344246       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.48718ms\"\nI1006 23:59:39.805992       1 service.go:301] Service services-9864/nodeport-update-service updated: 1 ports\nI1006 23:59:39.806056       1 service.go:416] Adding new service port \"services-9864/nodeport-update-service\" at 100.67.203.79:80/TCP\nI1006 23:59:39.806616       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:39.895995       1 service.go:301] Service services-9864/nodeport-update-service updated: 1 ports\nI1006 23:59:39.923528       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"117.459848ms\"\nI1006 23:59:39.923684       1 service.go:416] Adding new service port \"services-9864/nodeport-update-service:tcp-port\" at 100.67.203.79:80/TCP\nI1006 23:59:39.923753       1 service.go:441] Removing service port \"services-9864/nodeport-update-service\"\nI1006 
23:59:39.923955       1 proxier.go:845] "Syncing iptables rules"
I1006 23:59:40.027880       1 proxier.go:1283] "Opened local port" port="\"nodePort for services-9864/nodeport-update-service:tcp-port\" (:31477/tcp4)"
I1006 23:59:40.043480       1 proxier.go:812] "SyncProxyRules complete" elapsed="119.79737ms"
I1006 23:59:45.713592       1 proxier.go:845] "Syncing iptables rules"
I1006 23:59:45.803554       1 proxier.go:812] "SyncProxyRules complete" elapsed="90.080149ms"
I1006 23:59:45.804850       1 proxier.go:845] "Syncing iptables rules"
I1006 23:59:45.917384       1 proxier.go:812] "SyncProxyRules complete" elapsed="112.676667ms"
I1007 00:00:03.785072       1 service.go:301] Service services-9864/nodeport-update-service updated: 2 ports
I1007 00:00:03.785310       1 service.go:418] Updating existing service port "services-9864/nodeport-update-service:tcp-port" at 100.67.203.79:80/TCP
I1007 00:00:03.785451       1 service.go:416] Adding new service port "services-9864/nodeport-update-service:udp-port" at 100.67.203.79:80/UDP
I1007 00:00:03.785666       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:03.890397       1 proxier.go:1283] "Opened local port" port="\"nodePort for services-9864/nodeport-update-service:udp-port\" (:31794/udp4)"
I1007 00:00:03.899507       1 proxier.go:812] "SyncProxyRules complete" elapsed="114.201926ms"
I1007 00:00:03.899867       1 proxier.go:829] "Stale service" protocol="udp" svcPortName="services-9864/nodeport-update-service:udp-port" clusterIP="100.67.203.79"
I1007 00:00:03.900058       1 proxier.go:839] Stale udp service NodePort services-9864/nodeport-update-service:udp-port -> 31794
I1007 00:00:03.900160       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:04.013644       1 proxier.go:812] "SyncProxyRules complete" elapsed="113.986808ms"
I1007 00:00:09.767145       1 service.go:301] Service deployment-6782/test-rolling-update-with-lb updated: 1 ports
I1007 00:00:09.768540       1 service.go:416] Adding new service port "deployment-6782/test-rolling-update-with-lb" at 100.71.8.24:80/TCP
I1007 00:00:09.768845       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:09.864133       1 proxier.go:1283] "Opened local port" port="\"nodePort for deployment-6782/test-rolling-update-with-lb\" (:32437/tcp4)"
I1007 00:00:09.870103       1 service_health.go:98] Opening healthcheck "deployment-6782/test-rolling-update-with-lb" on port 30696
I1007 00:00:09.870357       1 proxier.go:812] "SyncProxyRules complete" elapsed="101.837581ms"
I1007 00:00:09.870574       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:10.003873       1 proxier.go:812] "SyncProxyRules complete" elapsed="133.460645ms"
I1007 00:00:13.984885       1 service.go:301] Service proxy-9315/proxy-service-lgkv5 updated: 4 ports
I1007 00:00:13.985273       1 service.go:416] Adding new service port "proxy-9315/proxy-service-lgkv5:portname2" at 100.64.141.200:81/TCP
I1007 00:00:13.985416       1 service.go:416] Adding new service port "proxy-9315/proxy-service-lgkv5:tlsportname1" at 100.64.141.200:443/TCP
I1007 00:00:13.985524       1 service.go:416] Adding new service port "proxy-9315/proxy-service-lgkv5:tlsportname2" at 100.64.141.200:444/TCP
I1007 00:00:13.985677       1 service.go:416] Adding new service port "proxy-9315/proxy-service-lgkv5:portname1" at 100.64.141.200:80/TCP
I1007 00:00:13.985927       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:14.253371       1 proxier.go:812] "SyncProxyRules complete" elapsed="268.088846ms"
I1007 00:00:14.253766       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:14.422009       1 proxier.go:812] "SyncProxyRules complete" elapsed="168.241018ms"
I1007 00:00:16.542554       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:16.664833       1 proxier.go:812] "SyncProxyRules complete" elapsed="122.457053ms"
I1007 00:00:16.665126       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:16.808618       1 proxier.go:812] "SyncProxyRules complete" elapsed="143.599816ms"
I1007 00:00:18.628638       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:18.764680       1 proxier.go:812] "SyncProxyRules complete" elapsed="136.178865ms"
I1007 00:00:21.327044       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:21.417183       1 proxier.go:812] "SyncProxyRules complete" elapsed="90.387666ms"
I1007 00:00:21.418392       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:21.564784       1 proxier.go:812] "SyncProxyRules complete" elapsed="146.679449ms"
W1007 00:00:21.762064       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingvjssq
W1007 00:00:21.789086       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing6pscf
W1007 00:00:21.812603       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62
W1007 00:00:21.965407       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62
W1007 00:00:22.017629       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62
W1007 00:00:22.046434       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62
W1007 00:00:22.121030       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing6pscf
W1007 00:00:22.127125       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingvjssq
I1007 00:00:24.109963       1 service.go:301] Service webhook-8158/e2e-test-webhook updated: 1 ports
I1007 00:00:24.110191       1 service.go:416] Adding new service port "webhook-8158/e2e-test-webhook" at 100.67.178.63:8443/TCP
I1007 00:00:24.110462       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:24.209693       1 proxier.go:812] "SyncProxyRules complete" elapsed="99.535892ms"
I1007 00:00:24.210185       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:24.303995       1 proxier.go:812] "SyncProxyRules complete" elapsed="93.946431ms"
I1007 00:00:26.633243       1 service.go:301] Service proxy-9315/proxy-service-lgkv5 updated: 0 ports
I1007 00:00:26.633487       1 service.go:441] Removing service port "proxy-9315/proxy-service-lgkv5:portname1"
I1007 00:00:26.633611       1 service.go:441] Removing service port "proxy-9315/proxy-service-lgkv5:portname2"
I1007 00:00:26.633688       1 service.go:441] Removing service port "proxy-9315/proxy-service-lgkv5:tlsportname1"
I1007 00:00:26.633772       1 service.go:441] Removing service port "proxy-9315/proxy-service-lgkv5:tlsportname2"
I1007 00:00:26.634062       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:26.756240       1 proxier.go:812] "SyncProxyRules complete" elapsed="122.737399ms"
I1007 00:00:26.756566       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:26.853861       1 proxier.go:812] "SyncProxyRules complete" elapsed="97.538858ms"
I1007 00:00:36.782447       1 service.go:301] Service services-9864/nodeport-update-service updated: 0 ports
I1007 00:00:36.782497       1 service.go:441] Removing service port "services-9864/nodeport-update-service:udp-port"
I1007 00:00:36.784993       1 service.go:441] Removing service port "services-9864/nodeport-update-service:tcp-port"
I1007 00:00:36.785206       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:37.003111       1 proxier.go:812] "SyncProxyRules complete" elapsed="220.596903ms"
I1007 00:00:37.003786       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:37.208173       1 proxier.go:812] "SyncProxyRules complete" elapsed="204.517371ms"
I1007 00:00:38.338261       1 service.go:301] Service webhook-8158/e2e-test-webhook updated: 0 ports
I1007 00:00:38.339523       1 service.go:441] Removing service port "webhook-8158/e2e-test-webhook"
I1007 00:00:38.339887       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:38.508766       1 proxier.go:812] "SyncProxyRules complete" elapsed="169.230023ms"
I1007 00:00:39.511131       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:39.704040       1 proxier.go:812] "SyncProxyRules complete" elapsed="193.046253ms"
I1007 00:00:49.134902       1 service.go:301] Service deployment-6782/test-rolling-update-with-lb updated: 1 ports
I1007 00:00:49.135370       1 service.go:418] Updating existing service port "deployment-6782/test-rolling-update-with-lb" at 100.71.8.24:80/TCP
I1007 00:00:49.135605       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:49.199227       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.228608ms"
I1007 00:00:50.167101       1 service.go:301] Service services-2/tolerate-unready updated: 1 ports
I1007 00:00:50.167164       1 service.go:416] Adding new service port "services-2/tolerate-unready:http" at 100.71.27.83:80/TCP
I1007 00:00:50.167391       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:50.257898       1 proxier.go:812] "SyncProxyRules complete" elapsed="90.722974ms"
I1007 00:00:50.258068       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:50.321529       1 proxier.go:812] "SyncProxyRules complete" elapsed="63.569497ms"
I1007 00:00:52.704746       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:52.853031       1 proxier.go:812] "SyncProxyRules complete" elapsed="148.383143ms"
I1007 00:00:52.853323       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:53.039107       1 proxier.go:812] "SyncProxyRules complete" elapsed="185.930665ms"
I1007 00:00:54.040470       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:54.082329       1 proxier.go:812] "SyncProxyRules complete" elapsed="42.015391ms"
I1007 00:00:55.672409       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:55.842989       1 proxier.go:812] "SyncProxyRules complete" elapsed="170.696757ms"
I1007 00:00:55.843809       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:55.913017       1 proxier.go:812] "SyncProxyRules complete" elapsed="69.356268ms"
I1007 00:00:59.104789       1 proxier.go:845] "Syncing iptables rules"
I1007 00:00:59.193906       1 proxier.go:812] "SyncProxyRules complete" elapsed="89.229589ms"
I1007 00:01:02.648736       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:02.757275       1 proxier.go:812] "SyncProxyRules complete" elapsed="108.639488ms"
I1007 00:01:02.757580       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:02.868599       1 proxier.go:812] "SyncProxyRules complete" elapsed="111.172106ms"
I1007 00:01:04.211290       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:04.270799       1 proxier.go:812] "SyncProxyRules complete" elapsed="59.624723ms"
I1007 00:01:07.216879       1 service.go:301] Service webhook-7555/e2e-test-webhook updated: 1 ports
I1007 00:01:07.217213       1 service.go:416] Adding new service port "webhook-7555/e2e-test-webhook" at 100.68.71.133:8443/TCP
I1007 00:01:07.217507       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:07.444585       1 proxier.go:812] "SyncProxyRules complete" elapsed="227.404728ms"
I1007 00:01:07.445003       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:07.605939       1 proxier.go:812] "SyncProxyRules complete" elapsed="161.081849ms"
I1007 00:01:08.474324       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:08.590211       1 proxier.go:812] "SyncProxyRules complete" elapsed="115.995417ms"
I1007 00:01:08.791214       1 service.go:301] Service webhook-7555/e2e-test-webhook updated: 0 ports
I1007 00:01:09.591738       1 service.go:441] Removing service port "webhook-7555/e2e-test-webhook"
I1007 00:01:09.592214       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:09.771645       1 proxier.go:812] "SyncProxyRules complete" elapsed="179.907671ms"
I1007 00:01:13.229247       1 service.go:301] Service webhook-6349/e2e-test-webhook updated: 1 ports
I1007 00:01:13.229702       1 service.go:416] Adding new service port "webhook-6349/e2e-test-webhook" at 100.65.69.127:8443/TCP
I1007 00:01:13.229912       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:13.293814       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.506197ms"
I1007 00:01:13.294076       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:13.351715       1 proxier.go:812] "SyncProxyRules complete" elapsed="57.837136ms"
I1007 00:01:14.297336       1 service.go:301] Service services-2/tolerate-unready updated: 0 ports
I1007 00:01:14.298186       1 service.go:441] Removing service port "services-2/tolerate-unready:http"
I1007 00:01:14.298495       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:14.467446       1 proxier.go:812] "SyncProxyRules complete" elapsed="170.009515ms"
I1007 00:01:15.469633       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:15.574621       1 proxier.go:812] "SyncProxyRules complete" elapsed="105.135978ms"
I1007 00:01:16.233308       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:16.389630       1 proxier.go:812] "SyncProxyRules complete" elapsed="156.820027ms"
I1007 00:01:20.219376       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:20.321528       1 proxier.go:812] "SyncProxyRules complete" elapsed="102.289895ms"
I1007 00:01:20.321856       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:20.384534       1 proxier.go:812] "SyncProxyRules complete" elapsed="62.953333ms"
I1007 00:01:21.385042       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:21.484222       1 proxier.go:812] "SyncProxyRules complete" elapsed="99.446857ms"
I1007 00:01:31.034504       1 service.go:301] Service webhook-6349/e2e-test-webhook updated: 0 ports
I1007 00:01:31.034557       1 service.go:441] Removing service port "webhook-6349/e2e-test-webhook"
I1007 00:01:31.039004       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:31.200044       1 proxier.go:812] "SyncProxyRules complete" elapsed="165.467385ms"
I1007 00:01:31.200405       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:31.343685       1 proxier.go:812] "SyncProxyRules complete" elapsed="143.383851ms"
I1007 00:01:32.344217       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:32.429569       1 proxier.go:812] "SyncProxyRules complete" elapsed="85.48977ms"
I1007 00:01:34.162152       1 service.go:301] Service services-9062/hairpin-test updated: 1 ports
I1007 00:01:34.162514       1 service.go:416] Adding new service port "services-9062/hairpin-test" at 100.71.18.41:8080/TCP
I1007 00:01:34.162885       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:34.310777       1 proxier.go:812] "SyncProxyRules complete" elapsed="148.267014ms"
I1007 00:01:34.311105       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:34.406700       1 proxier.go:812] "SyncProxyRules complete" elapsed="95.863726ms"
I1007 00:01:40.565999       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:40.636902       1 proxier.go:812] "SyncProxyRules complete" elapsed="71.013304ms"
I1007 00:01:44.245828       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:44.364837       1 proxier.go:812] "SyncProxyRules complete" elapsed="119.124203ms"
I1007 00:01:44.365359       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:44.469649       1 proxier.go:812] "SyncProxyRules complete" elapsed="104.43003ms"
I1007 00:01:47.726672       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:47.891736       1 proxier.go:812] "SyncProxyRules complete" elapsed="165.192745ms"
I1007 00:01:47.892221       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:48.035525       1 proxier.go:812] "SyncProxyRules complete" elapsed="143.730071ms"
I1007 00:01:48.906143       1 service.go:301] Service conntrack-2227/svc-udp updated: 1 ports
I1007 00:01:48.906203       1 service.go:416] Adding new service port "conntrack-2227/svc-udp:udp" at 100.69.243.183:80/UDP
I1007 00:01:48.906327       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:49.026998       1 proxier.go:1283] "Opened local port" port="\"nodePort for conntrack-2227/svc-udp:udp\" (:31123/udp4)"
I1007 00:01:49.043726       1 proxier.go:812] "SyncProxyRules complete" elapsed="137.506772ms"
I1007 00:01:49.574043       1 service.go:301] Service services-6611/multi-endpoint-test updated: 2 ports
I1007 00:01:50.048158       1 service.go:416] Adding new service port "services-6611/multi-endpoint-test:portname1" at 100.71.14.54:80/TCP
I1007 00:01:50.048336       1 service.go:416] Adding new service port "services-6611/multi-endpoint-test:portname2" at 100.71.14.54:81/TCP
I1007 00:01:50.048556       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:50.185060       1 proxier.go:812] "SyncProxyRules complete" elapsed="136.930607ms"
I1007 00:01:50.316655       1 service.go:301] Service services-9062/hairpin-test updated: 0 ports
I1007 00:01:51.185830       1 service.go:441] Removing service port "services-9062/hairpin-test"
I1007 00:01:51.186173       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:51.308025       1 proxier.go:812] "SyncProxyRules complete" elapsed="122.194318ms"
I1007 00:01:52.807298       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:52.927229       1 proxier.go:812] "SyncProxyRules complete" elapsed="120.042475ms"
I1007 00:01:56.448955       1 service.go:301] Service services-1219/nodeport-test updated: 1 ports
I1007 00:01:56.449019       1 service.go:416] Adding new service port "services-1219/nodeport-test:http" at 100.68.145.78:80/TCP
I1007 00:01:56.449688       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:56.512818       1 proxier.go:1283] "Opened local port" port="\"nodePort for services-1219/nodeport-test:http\" (:31620/tcp4)"
I1007 00:01:56.536624       1 proxier.go:812] "SyncProxyRules complete" elapsed="87.591923ms"
I1007 00:01:56.536797       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:56.616985       1 proxier.go:812] "SyncProxyRules complete" elapsed="80.29647ms"
I1007 00:01:57.544316       1 service.go:301] Service aggregator-242/sample-api updated: 1 ports
I1007 00:01:57.544380       1 service.go:416] Adding new service port "aggregator-242/sample-api" at 100.69.214.245:7443/TCP
I1007 00:01:57.544527       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:57.709659       1 proxier.go:812] "SyncProxyRules complete" elapsed="165.265414ms"
I1007 00:01:58.711444       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:58.844770       1 proxier.go:812] "SyncProxyRules complete" elapsed="133.454487ms"
I1007 00:01:59.846267       1 proxier.go:845] "Syncing iptables rules"
I1007 00:01:59.918114       1 proxier.go:812] "SyncProxyRules complete" elapsed="71.976211ms"
I1007 00:02:02.614700       1 service.go:301] Service conntrack-7711/boom-server updated: 1 ports
I1007 00:02:02.614965       1 service.go:416] Adding new service port "conntrack-7711/boom-server" at 100.69.77.194:9000/TCP
I1007 00:02:02.615088       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:02.757508       1 proxier.go:812] "SyncProxyRules complete" elapsed="142.533585ms"
I1007 00:02:02.758145       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:02.848942       1 proxier.go:812] "SyncProxyRules complete" elapsed="90.96646ms"
I1007 00:02:05.198683       1 proxier.go:829] "Stale service" protocol="udp" svcPortName="conntrack-2227/svc-udp:udp" clusterIP="100.69.243.183"
I1007 00:02:05.199003       1 proxier.go:839] Stale udp service NodePort conntrack-2227/svc-udp:udp -> 31123
I1007 00:02:05.199638       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:05.287686       1 proxier.go:812] "SyncProxyRules complete" elapsed="89.187829ms"
I1007 00:02:06.838189       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:06.889357       1 proxier.go:812] "SyncProxyRules complete" elapsed="51.287106ms"
I1007 00:02:06.995339       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:07.053483       1 proxier.go:812] "SyncProxyRules complete" elapsed="58.275455ms"
I1007 00:02:07.229255       1 service.go:301] Service services-6611/multi-endpoint-test updated: 0 ports
I1007 00:02:08.054492       1 service.go:441] Removing service port "services-6611/multi-endpoint-test:portname1"
I1007 00:02:08.054628       1 service.go:441] Removing service port "services-6611/multi-endpoint-test:portname2"
I1007 00:02:08.054819       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:08.127081       1 proxier.go:812] "SyncProxyRules complete" elapsed="72.587798ms"
I1007 00:02:11.056398       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:11.123566       1 proxier.go:812] "SyncProxyRules complete" elapsed="67.285719ms"
I1007 00:02:13.722165       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:13.757167       1 service.go:301] Service aggregator-242/sample-api updated: 0 ports
I1007 00:02:13.813354       1 proxier.go:812] "SyncProxyRules complete" elapsed="91.303931ms"
I1007 00:02:13.813418       1 service.go:441] Removing service port "aggregator-242/sample-api"
I1007 00:02:13.813722       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:13.892131       1 proxier.go:812] "SyncProxyRules complete" elapsed="78.683046ms"
I1007 00:02:14.893749       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:15.083650       1 proxier.go:812] "SyncProxyRules complete" elapsed="190.049708ms"
I1007 00:02:21.196262       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:21.252073       1 proxier.go:812] "SyncProxyRules complete" elapsed="55.938035ms"
I1007 00:02:21.523656       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:21.620119       1 proxier.go:812] "SyncProxyRules complete" elapsed="96.573579ms"
I1007 00:02:22.620917       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:22.680888       1 proxier.go:812] "SyncProxyRules complete" elapsed="60.081215ms"
I1007 00:02:23.911638       1 service.go:301] Service webhook-7530/e2e-test-webhook updated: 1 ports
I1007 00:02:23.911862       1 service.go:416] Adding new service port "webhook-7530/e2e-test-webhook" at 100.68.253.240:8443/TCP
I1007 00:02:23.912126       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:23.960271       1 proxier.go:812] "SyncProxyRules complete" elapsed="48.5649ms"
I1007 00:02:24.883420       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:25.049017       1 service.go:301] Service services-1219/nodeport-test updated: 0 ports
I1007 00:02:25.097992       1 proxier.go:812] "SyncProxyRules complete" elapsed="214.695381ms"
I1007 00:02:25.526597       1 service.go:301] Service webhook-7530/e2e-test-webhook updated: 0 ports
I1007 00:02:25.526647       1 service.go:441] Removing service port "services-1219/nodeport-test:http"
I1007 00:02:25.526667       1 service.go:441] Removing service port "webhook-7530/e2e-test-webhook"
I1007 00:02:25.527515       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:25.664089       1 proxier.go:812] "SyncProxyRules complete" elapsed="137.415681ms"
I1007 00:02:26.664405       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:26.758113       1 proxier.go:812] "SyncProxyRules complete" elapsed="93.845415ms"
I1007 00:02:36.622233       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:36.625887       1 service.go:301] Service services-7165/nodeport-service updated: 1 ports
I1007 00:02:36.650023       1 service.go:301] Service services-7165/externalsvc updated: 1 ports
I1007 00:02:36.700099       1 proxier.go:812] "SyncProxyRules complete" elapsed="77.96221ms"
I1007 00:02:36.701150       1 service.go:416] Adding new service port "services-7165/nodeport-service" at 100.69.72.84:80/TCP
I1007 00:02:36.701286       1 service.go:416] Adding new service port "services-7165/externalsvc" at 100.64.148.54:80/TCP
I1007 00:02:36.701595       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:36.765867       1 proxier.go:1283] "Opened local port" port="\"nodePort for services-7165/nodeport-service\" (:30607/tcp4)"
I1007 00:02:36.771711       1 proxier.go:812] "SyncProxyRules complete" elapsed="71.449414ms"
I1007 00:02:37.349764       1 service.go:301] Service conntrack-2227/svc-udp updated: 0 ports
I1007 00:02:37.772100       1 service.go:441] Removing service port "conntrack-2227/svc-udp:udp"
I1007 00:02:37.772384       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:37.855402       1 proxier.go:812] "SyncProxyRules complete" elapsed="83.324487ms"
I1007 00:02:45.600668       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:45.664664       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.106414ms"
I1007 00:02:45.716171       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:45.802491       1 proxier.go:812] "SyncProxyRules complete" elapsed="86.43105ms"
I1007 00:02:45.815365       1 service.go:301] Service services-7165/nodeport-service updated: 0 ports
I1007 00:02:46.803395       1 service.go:441] Removing service port "services-7165/nodeport-service"
I1007 00:02:46.803716       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:46.870584       1 proxier.go:812] "SyncProxyRules complete" elapsed="67.185859ms"
I1007 00:02:54.025508       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:54.126171       1 proxier.go:812] "SyncProxyRules complete" elapsed="101.105779ms"
I1007 00:02:54.126514       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:54.241299       1 proxier.go:812] "SyncProxyRules complete" elapsed="115.066063ms"
I1007 00:02:55.242754       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:55.330538       1 proxier.go:812] "SyncProxyRules complete" elapsed="87.904664ms"
I1007 00:02:58.646260       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:58.747271       1 proxier.go:812] "SyncProxyRules complete" elapsed="101.146758ms"
I1007 00:02:58.747588       1 proxier.go:845] "Syncing iptables rules"
I1007 00:02:58.812532       1 proxier.go:812] "SyncProxyRules complete" elapsed="65.028508ms"
I1007 00:03:00.854533       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:01.006844       1 proxier.go:812] "SyncProxyRules complete" elapsed="152.427336ms"
I1007 00:03:01.007221       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:01.109333       1 proxier.go:812] "SyncProxyRules complete" elapsed="102.406451ms"
I1007 00:03:05.403412       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:05.453995       1 proxier.go:812] "SyncProxyRules complete" elapsed="50.72132ms"
I1007 00:03:05.611262       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:05.659496       1 service.go:301] Service services-7165/externalsvc updated: 0 ports
I1007 00:03:05.728382       1 proxier.go:812] "SyncProxyRules complete" elapsed="117.225662ms"
I1007 00:03:06.730996       1 service.go:441] Removing service port "services-7165/externalsvc"
I1007 00:03:06.731159       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:06.795981       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.981355ms"
I1007 00:03:16.035459       1 service.go:301] Service conntrack-7711/boom-server updated: 0 ports
I1007 00:03:16.035622       1 service.go:441] Removing service port "conntrack-7711/boom-server"
I1007 00:03:16.035863       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:16.137103       1 proxier.go:812] "SyncProxyRules complete" elapsed="101.461118ms"
I1007 00:03:16.137428       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:16.234364       1 proxier.go:812] "SyncProxyRules complete" elapsed="97.123938ms"
I1007 00:03:24.740671       1 service.go:301] Service services-3278/sourceip-test updated: 1 ports
I1007 00:03:24.740733       1 service.go:416] Adding new service port "services-3278/sourceip-test" at 100.68.160.200:8080/TCP
I1007 00:03:24.741065       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:24.835375       1 proxier.go:812] "SyncProxyRules complete" elapsed="94.628874ms"
I1007 00:03:24.835521       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:25.027142       1 proxier.go:812] "SyncProxyRules complete" elapsed="191.709969ms"
I1007 00:03:30.602089       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:30.703079       1 proxier.go:812] "SyncProxyRules complete" elapsed="101.097788ms"
I1007 00:03:32.075409       1 service.go:301] Service deployment-6782/test-rolling-update-with-lb updated: 0 ports
I1007 00:03:32.075846       1 service.go:441] Removing service port "deployment-6782/test-rolling-update-with-lb"
I1007 00:03:32.076114       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:32.130354       1 service_health.go:83] Closing healthcheck "deployment-6782/test-rolling-update-with-lb" on port 30696
I1007 00:03:32.130564       1 proxier.go:812] "SyncProxyRules complete" elapsed="54.660981ms"
I1007 00:03:47.903290       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:47.934831       1 service.go:301] Service services-3278/sourceip-test updated: 0 ports
I1007 00:03:48.002723       1 proxier.go:812] "SyncProxyRules complete" elapsed="99.532783ms"
I1007 00:03:48.002775       1 service.go:441] Removing service port "services-3278/sourceip-test"
I1007 00:03:48.003131       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:48.084882       1 proxier.go:812] "SyncProxyRules complete" elapsed="82.093001ms"
I1007 00:03:58.721000       1 service.go:301] Service crd-webhook-7102/e2e-test-crd-conversion-webhook updated: 1 ports
I1007 00:03:58.721226       1 service.go:416] Adding new service port "crd-webhook-7102/e2e-test-crd-conversion-webhook" at 100.67.215.194:9443/TCP
I1007 00:03:58.721467       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:58.787056       1 proxier.go:812] "SyncProxyRules complete" elapsed="65.823985ms"
I1007 00:03:58.787403       1 proxier.go:845] "Syncing iptables rules"
I1007 00:03:58.851628       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.423443ms"
I1007 00:04:02.943512       1 service.go:301] Service crd-webhook-7102/e2e-test-crd-conversion-webhook updated: 0 ports
I1007 00:04:02.943756       1 service.go:441] Removing service port "crd-webhook-7102/e2e-test-crd-conversion-webhook"
I1007 00:04:02.944069       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:03.115895       1 proxier.go:812] "SyncProxyRules complete" elapsed="172.120416ms"
I1007 00:04:03.117223       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:03.255815       1 proxier.go:812] "SyncProxyRules complete" elapsed="139.85162ms"
I1007 00:04:06.382132       1 service.go:301] Service dns-7567/test-service-2 updated: 1 ports
I1007 00:04:06.382186       1 service.go:416] Adding new service port "dns-7567/test-service-2:http" at 100.64.176.123:80/TCP
I1007 00:04:06.382301       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:06.473965       1 proxier.go:812] "SyncProxyRules complete" elapsed="91.771501ms"
I1007 00:04:06.474282       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:06.523388       1 proxier.go:812] "SyncProxyRules complete" elapsed="49.178471ms"
I1007 00:04:08.464563       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:08.654462       1 proxier.go:812] "SyncProxyRules complete" elapsed="190.0108ms"
I1007 00:04:14.264058       1 service.go:301] Service webhook-1335/e2e-test-webhook updated: 1 ports
I1007 00:04:14.264124       1 service.go:416] Adding new service port "webhook-1335/e2e-test-webhook" at 100.64.198.56:8443/TCP
I1007 00:04:14.264896       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:14.314480       1 proxier.go:812] "SyncProxyRules complete" elapsed="50.350071ms"
I1007 00:04:14.314777       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:14.370820       1 proxier.go:812] "SyncProxyRules complete" elapsed="56.200928ms"
I1007 00:04:15.273250       1 service.go:301] Service pods-8701/fooservice updated: 1 ports
I1007 00:04:15.273684       1 service.go:416] Adding new service port "pods-8701/fooservice" at 100.68.194.96:8765/TCP
I1007 00:04:15.274004       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:15.402056       1 proxier.go:812] "SyncProxyRules complete" elapsed="128.368444ms"
I1007 00:04:15.830184       1 service.go:301] Service webhook-1335/e2e-test-webhook updated: 0 ports
I1007 00:04:16.404135       1 service.go:441] Removing service port "webhook-1335/e2e-test-webhook"
I1007 00:04:16.404367       1 proxier.go:845] "Syncing iptables rules"
I1007 00:04:16.538375       1 proxier.go:812] "SyncProxyRules complete" elapsed="134.243415ms"
==== END logs for container kube-proxy of pod kube-system/kube-proxy-master-us-west3-a-8lvv ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-west3-a-87xh ====
I1006 23:44:29.784995       1 flags.go:59] FLAG: --add-dir-header="false"
I1006 23:44:29.785236       1 flags.go:59] FLAG: --alsologtostderr="true"
I1006 23:44:29.785245       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I1006 23:44:29.785253       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I1006 23:44:29.785259       1 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I1006 23:44:29.785265       1 flags.go:59] FLAG: --cleanup="false"
I1006 23:44:29.785269       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I1006 23:44:29.785275       1 flags.go:59] FLAG: --config=""
I1006 23:44:29.785279       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I1006 23:44:29.785286       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I1006 23:44:29.785292       1 flags.go:59] FLAG: --conntrack-min="131072"
I1006 23:44:29.785297       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I1006 23:44:29.785302       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I1006 23:44:29.785307       1 flags.go:59] FLAG: --detect-local-mode=""
I1006 23:44:29.785313       1 flags.go:59] FLAG: --feature-gates=""
I1006 23:44:29.785321       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I1006 23:44:29.785328       1 flags.go:59] FLAG: --healthz-port="10256"
I1006 23:44:29.785333       1 flags.go:59] FLAG: --help="false"
I1006 23:44:29.785339       1 flags.go:59] FLAG: --hostname-override=""
I1006 23:44:29.785343       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I1006 23:44:29.785348       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I1006 23:44:29.785354       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I1006 23:44:29.785359       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I1006 23:44:29.785377       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I1006 23:44:29.785382       1 flags.go:59] FLAG: --ipvs-scheduler=""
I1006 23:44:29.785387       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I1006 23:44:29.785392       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I1006 23:44:29.785397       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I1006 23:44:29.785401       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I1006 23:44:29.785406       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I1006 23:44:29.785410       1 flags.go:59] FLAG: --kube-api-burst="10"
I1006 23:44:29.785415       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1006 23:44:29.785420       1 flags.go:59] FLAG: --kube-api-qps="5"
I1006 23:44:29.785427       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I1006 23:44:29.785433       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I1006 23:44:29.785446       1 flags.go:59] FLAG: --log-dir=""
I1006 23:44:29.785452       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I1006 23:44:29.785457       1 flags.go:59] FLAG: --log-file-max-size="1800"
I1006 23:44:29.785462       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I1006 23:44:29.785467       1 flags.go:59] FLAG: --logtostderr="false"
I1006 23:44:29.785472       1 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I1006 23:44:29.785478       1 flags.go:59] FLAG: --masquerade-all="false"
I1006 23:44:29.785482       1 flags.go:59] FLAG: --master="https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local"
I1006 23:44:29.785489       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I1006 23:44:29.785494       1 flags.go:59] FLAG: --metrics-port="10249"
I1006 23:44:29.785499       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I1006 23:44:29.785514       1 flags.go:59] FLAG: --one-output="false"
I1006 23:44:29.785519       1 flags.go:59] FLAG: --oom-score-adj="-998"
I1006 23:44:29.785524       1 flags.go:59] FLAG: --profiling="false"
I1006 23:44:29.785529       1 flags.go:59] FLAG: --proxy-mode=""
I1006 23:44:29.785536       1 flags.go:59] FLAG: --proxy-port-range=""
I1006 23:44:29.785545       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I1006 23:44:29.785550       1 flags.go:59] FLAG: --skip-headers="false"
I1006 23:44:29.785558       1 flags.go:59] FLAG: --skip-log-headers="false"
I1006 23:44:29.785563       1 flags.go:59] FLAG: --stderrthreshold="2"
I1006 23:44:29.785568       1 flags.go:59] FLAG: --udp-timeout="250ms"
I1006 23:44:29.785574       1 flags.go:59] FLAG: --v="2"
I1006 23:44:29.785580       1 flags.go:59] FLAG: --version="false"
I1006 23:44:29.785588       1 flags.go:59] FLAG: --vmodule=""
I1006 23:44:29.785594       1 flags.go:59] FLAG: --write-config-to=""
W1006 23:44:29.785615       1 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1006 23:44:29.786129       1 feature_gate.go:245] feature gates: &{map[]}
I1006 23:44:29.786401       1 feature_gate.go:245] feature gates: &{map[]}
E1006 23:44:29.940334       1 node.go:161] Failed to retrieve node info: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/nodes/nodes-us-west3-a-87xh": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host
E1006 23:44:31.033573       1 node.go:161] Failed to retrieve node info: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/nodes/nodes-us-west3-a-87xh": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host
E1006 23:44:33.411578       1 node.go:161] Failed to retrieve node info: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/nodes/nodes-us-west3-a-87xh": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host
E1006 23:44:38.175162       1 node.go:161] Failed to retrieve node info: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/nodes/nodes-us-west3-a-87xh": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host
E1006 23:44:46.548495       1 node.go:161] Failed to retrieve node info: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/nodes/nodes-us-west3-a-87xh": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host
E1006 23:45:04.828879       1 node.go:161] Failed to retrieve node info: Get "https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/nodes/nodes-us-west3-a-87xh": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host
I1006 23:45:04.828922       1 server.go:836] can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag
I1006 23:45:04.828943       1 
server_others.go:140] Detected node IP 127.0.0.1\nW1006 23:45:04.828989       1 server_others.go:565] Unknown proxy mode \"\", assuming iptables proxy\nI1006 23:45:04.829113       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'\nI1006 23:45:04.866873       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary\nI1006 23:45:04.866924       1 server_others.go:212] Using iptables Proxier.\nI1006 23:45:04.866938       1 server_others.go:219] creating dualStackProxier for iptables.\nW1006 23:45:04.866961       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\nI1006 23:45:04.867059       1 utils.go:370] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI1006 23:45:04.867159       1 proxier.go:281] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI1006 23:45:04.867208       1 proxier.go:327] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI1006 23:45:04.867245       1 proxier.go:337] \"Iptables supports --random-fully\" ipFamily=IPv4\nI1006 23:45:04.867316       1 proxier.go:281] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI1006 23:45:04.867350       1 proxier.go:327] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI1006 23:45:04.867369       1 proxier.go:337] \"Iptables supports --random-fully\" ipFamily=IPv6\nI1006 23:45:04.867575       1 server.go:649] Version: v1.22.2\nI1006 23:45:04.869381       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1006 23:45:04.870524       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1006 23:45:04.870747       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1006 23:45:04.873121       1 config.go:315] Starting service config controller\nI1006 23:45:04.873145       1 
shared_informer.go:240] Waiting for caches to sync for service config\nI1006 23:45:04.873372       1 config.go:224] Starting endpoint slice config controller\nI1006 23:45:04.873389       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nE1006 23:45:04.876777       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:45:04.877041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:04.877290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:05.852859       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:06.121084       1 
reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:08.385558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:08.954358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:12.883760       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:14.323981       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:15.478723       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:45:21.246327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:24.256619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:27.230151       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:45:39.092512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:39.451555       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:45:42.735329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:45:51.797319       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:46:03.013158       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:46:14.419158       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup 
api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:46:15.210637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:46:23.767784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host\nE1006 23:46:25.232453       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:46:35.907445       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 23:46:46.211197       1 event_broadcaster.go:262] Unable to write event: 'Post \"https://api.internal.e2e-4e8fce5b36-0a91d.k8s.local/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp: lookup api.internal.e2e-4e8fce5b36-0a91d.k8s.local on 169.254.169.254:53: no such host' (may retry after sleeping)\nE1006 
23:46:56.917457       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"nodes-us-west3-a-87xh.16ab95b76bc11b01\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04fab2033ef0e1c, ext:35112167672, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:\"kube-proxy\", ReportingInstance:\"kube-proxy-nodes-us-west3-a-87xh\", Action:\"StartKubeProxy\", Reason:\"Starting\", Regarding:v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"nodes-us-west3-a-87xh\", UID:\"nodes-us-west3-a-87xh\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}, Related:(*v1.ObjectReference)(nil), Note:\"\", Type:\"Normal\", DeprecatedSource:v1.EventSource{Component:\"\", Host:\"\"}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event \"nodes-us-west3-a-87xh.16ab95b76bc11b01\" is invalid: involvedObject.namespace: Invalid value: \"\": does not match event.namespace' (will not retry!)\nI1006 23:47:07.274037       1 shared_informer.go:247] Caches are synced for endpoint slice config \nI1006 23:47:07.274187       1 proxier.go:804] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI1006 23:47:07.274266       1 proxier.go:804] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI1006 
23:47:19.099824       1 service.go:301] Service default/kubernetes updated: 1 ports\nI1006 23:47:19.099888       1 service.go:301] Service kube-system/kube-dns updated: 3 ports\nI1006 23:47:19.174212       1 shared_informer.go:247] Caches are synced for service config \nI1006 23:47:19.174345       1 service.go:416] Adding new service port \"default/kubernetes:https\" at 100.64.0.1:443/TCP\nI1006 23:47:19.174384       1 service.go:416] Adding new service port \"kube-system/kube-dns:dns\" at 100.64.0.10:53/UDP\nI1006 23:47:19.174402       1 service.go:416] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.64.0.10:53/TCP\nI1006 23:47:19.174413       1 service.go:416] Adding new service port \"kube-system/kube-dns:metrics\" at 100.64.0.10:9153/TCP\nI1006 23:47:19.174569       1 proxier.go:829] \"Stale service\" protocol=\"udp\" svcPortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI1006 23:47:19.174604       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:47:19.233216       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"58.838368ms\"\nI1006 23:47:19.233424       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:47:19.277214       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"43.95726ms\"\nI1006 23:47:47.594179       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:47:47.639406       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.357793ms\"\nI1006 23:47:47.639516       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:47:47.682078       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"42.628644ms\"\nI1006 23:47:48.682337       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:47:48.731847       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.59963ms\"\nI1006 23:51:15.234782       1 service.go:301] Service services-8739/externalip-test updated: 1 ports\nI1006 23:51:15.234908       1 service.go:416] Adding new service port \"services-8739/externalip-test:http\" at 
100.67.78.135:80/TCP\nI1006 23:51:15.234961       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:15.283747       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.849152ms\"\nI1006 23:51:15.283825       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:15.324418       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"40.601336ms\"\nI1006 23:51:23.573806       1 service.go:301] Service proxy-6177/test-service updated: 1 ports\nI1006 23:51:23.573860       1 service.go:416] Adding new service port \"proxy-6177/test-service\" at 100.67.154.127:80/TCP\nI1006 23:51:23.573921       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:23.635144       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"61.273845ms\"\nI1006 23:51:23.635240       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:23.699108       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.90612ms\"\nI1006 23:51:24.699509       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:24.749830       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.395008ms\"\nI1006 23:51:26.779592       1 service.go:301] Service webhook-5953/e2e-test-webhook updated: 1 ports\nI1006 23:51:26.779657       1 service.go:416] Adding new service port \"webhook-5953/e2e-test-webhook\" at 100.68.31.7:8443/TCP\nI1006 23:51:26.779703       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:26.820470       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"40.810164ms\"\nI1006 23:51:26.820682       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:26.863184       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"42.673802ms\"\nI1006 23:51:28.165127       1 service.go:301] Service webhook-5953/e2e-test-webhook updated: 0 ports\nI1006 23:51:28.165383       1 service.go:441] Removing service port \"webhook-5953/e2e-test-webhook\"\nI1006 23:51:28.165440       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:28.242551       1 
proxier.go:812] \"SyncProxyRules complete\" elapsed=\"77.158509ms\"\nI1006 23:51:29.140674       1 service.go:301] Service proxy-6177/test-service updated: 0 ports\nI1006 23:51:29.140713       1 service.go:441] Removing service port \"proxy-6177/test-service\"\nI1006 23:51:29.140778       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:29.257662       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"116.93281ms\"\nI1006 23:51:30.258187       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:30.309295       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"51.17493ms\"\nI1006 23:51:32.467662       1 service.go:301] Service webhook-4411/e2e-test-webhook updated: 1 ports\nI1006 23:51:32.467726       1 service.go:416] Adding new service port \"webhook-4411/e2e-test-webhook\" at 100.70.35.148:8443/TCP\nI1006 23:51:32.467774       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:32.513698       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.964733ms\"\nI1006 23:51:32.513937       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:32.576703       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"62.948527ms\"\nI1006 23:51:40.793957       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:40.864827       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"70.945159ms\"\nI1006 23:51:40.864934       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:40.930074       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"65.197317ms\"\nI1006 23:51:41.160814       1 service.go:301] Service services-8739/externalip-test updated: 0 ports\nI1006 23:51:41.930246       1 service.go:441] Removing service port \"services-8739/externalip-test:http\"\nI1006 23:51:41.930360       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:41.986264       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.014939ms\"\nI1006 23:51:45.120536       1 service.go:301] Service webhook-4411/e2e-test-webhook 
updated: 0 ports\nI1006 23:51:45.120588       1 service.go:441] Removing service port \"webhook-4411/e2e-test-webhook\"\nI1006 23:51:45.120632       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:45.173251       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"52.644901ms\"\nI1006 23:51:45.173343       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:51:45.231289       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"57.988908ms\"\nI1006 23:52:59.657355       1 service.go:301] Service endpointslice-7758/example-int-port updated: 1 ports\nI1006 23:52:59.657731       1 service.go:416] Adding new service port \"endpointslice-7758/example-int-port:example\" at 100.64.238.3:80/TCP\nI1006 23:52:59.658026       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:52:59.710959       1 service.go:301] Service endpointslice-7758/example-named-port updated: 1 ports\nI1006 23:52:59.721707       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.984491ms\"\nI1006 23:52:59.721750       1 service.go:416] Adding new service port \"endpointslice-7758/example-named-port:http\" at 100.68.250.7:80/TCP\nI1006 23:52:59.721807       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:52:59.748008       1 service.go:301] Service endpointslice-7758/example-no-match updated: 1 ports\nI1006 23:52:59.775176       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"53.405134ms\"\nI1006 23:53:00.775437       1 service.go:416] Adding new service port \"endpointslice-7758/example-no-match:example-no-match\" at 100.67.114.223:80/TCP\nI1006 23:53:00.775595       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:00.828329       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"52.916198ms\"\nI1006 23:53:02.169131       1 service.go:301] Service kubectl-8701/agnhost-replica updated: 1 ports\nI1006 23:53:02.169183       1 service.go:416] Adding new service port \"kubectl-8701/agnhost-replica\" at 100.65.55.244:6379/TCP\nI1006 
23:53:02.169230       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:02.240652       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"71.466262ms\"\nI1006 23:53:02.442774       1 service.go:301] Service kubectl-8701/agnhost-primary updated: 1 ports\nI1006 23:53:02.716047       1 service.go:301] Service kubectl-8701/frontend updated: 1 ports\nI1006 23:53:02.716111       1 service.go:416] Adding new service port \"kubectl-8701/frontend\" at 100.65.231.157:80/TCP\nI1006 23:53:02.716133       1 service.go:416] Adding new service port \"kubectl-8701/agnhost-primary\" at 100.68.97.17:6379/TCP\nI1006 23:53:02.716189       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:02.764170       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.057554ms\"\nI1006 23:53:03.765051       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:03.859366       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"94.380644ms\"\nI1006 23:53:05.264888       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:05.330776       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"65.938393ms\"\nI1006 23:53:10.070768       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:10.181875       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"111.151818ms\"\nI1006 23:53:10.181967       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:10.312690       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"130.762848ms\"\nI1006 23:53:11.078352       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:11.135759       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"57.480175ms\"\nI1006 23:53:15.452321       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:15.505384       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"53.107664ms\"\nI1006 23:53:16.106343       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:16.168851       1 proxier.go:812] \"SyncProxyRules complete\" 
elapsed=\"62.568246ms\"\nI1006 23:53:16.687987       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:16.750877       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"62.961021ms\"\nI1006 23:53:17.500600       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:17.585913       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"85.379581ms\"\nI1006 23:53:18.834063       1 service.go:301] Service kubectl-8701/agnhost-replica updated: 0 ports\nI1006 23:53:18.834102       1 service.go:441] Removing service port \"kubectl-8701/agnhost-replica\"\nI1006 23:53:18.834153       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:18.932632       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"98.50668ms\"\nI1006 23:53:19.012681       1 service.go:301] Service kubectl-8701/agnhost-primary updated: 0 ports\nI1006 23:53:19.186589       1 service.go:301] Service kubectl-8701/frontend updated: 0 ports\nI1006 23:53:19.935189       1 service.go:441] Removing service port \"kubectl-8701/agnhost-primary\"\nI1006 23:53:19.935240       1 service.go:441] Removing service port \"kubectl-8701/frontend\"\nI1006 23:53:19.935369       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:19.968908       1 service.go:301] Service endpointslicemirroring-8785/example-custom-endpoints updated: 1 ports\nI1006 23:53:19.998070       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"62.877105ms\"\nI1006 23:53:20.998228       1 service.go:416] Adding new service port \"endpointslicemirroring-8785/example-custom-endpoints:example\" at 100.68.11.80:80/TCP\nI1006 23:53:20.998313       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:21.056007       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"57.797847ms\"\nI1006 23:53:25.212463       1 service.go:301] Service endpointslicemirroring-8785/example-custom-endpoints updated: 0 ports\nI1006 23:53:25.212512       1 service.go:441] Removing service port 
\"endpointslicemirroring-8785/example-custom-endpoints:example\"\nI1006 23:53:25.212571       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:25.284178       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"71.644086ms\"\nI1006 23:53:29.988200       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:30.056108       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"67.969151ms\"\nI1006 23:53:30.056281       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:30.113102       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.866436ms\"\nI1006 23:53:30.994532       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:31.041949       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.462981ms\"\nI1006 23:53:32.042164       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:32.097214       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.124129ms\"\nI1006 23:53:45.339646       1 service.go:301] Service endpointslice-7758/example-int-port updated: 0 ports\nI1006 23:53:45.339695       1 service.go:441] Removing service port \"endpointslice-7758/example-int-port:example\"\nI1006 23:53:45.339754       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:45.387190       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.481733ms\"\nI1006 23:53:45.387581       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:53:45.396753       1 service.go:301] Service endpointslice-7758/example-named-port updated: 0 ports\nI1006 23:53:45.435000       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.655254ms\"\nI1006 23:53:45.440170       1 service.go:301] Service endpointslice-7758/example-no-match updated: 0 ports\nI1006 23:53:46.438471       1 service.go:441] Removing service port \"endpointslice-7758/example-named-port:http\"\nI1006 23:53:46.438514       1 service.go:441] Removing service port \"endpointslice-7758/example-no-match:example-no-match\"\nI1006 23:53:46.438625       1 
proxier.go:845] "Syncing iptables rules"
I1006 23:53:46.520644       1 proxier.go:812] "SyncProxyRules complete" elapsed="82.155745ms"
I1006 23:53:48.661332       1 service.go:301] Service services-1726/up-down-1 updated: 1 ports
I1006 23:53:48.661381       1 service.go:416] Adding new service port "services-1726/up-down-1" at 100.68.148.195:80/TCP
I1006 23:53:48.661428       1 proxier.go:845] "Syncing iptables rules"
I1006 23:53:48.915221       1 proxier.go:812] "SyncProxyRules complete" elapsed="253.827178ms"
I1006 23:53:48.915314       1 proxier.go:845] "Syncing iptables rules"
I1006 23:53:48.981149       1 proxier.go:812] "SyncProxyRules complete" elapsed="65.86567ms"
I1006 23:53:54.124733       1 proxier.go:845] "Syncing iptables rules"
I1006 23:53:54.198855       1 proxier.go:812] "SyncProxyRules complete" elapsed="74.189475ms"
I1006 23:53:57.919694       1 proxier.go:845] "Syncing iptables rules"
I1006 23:53:57.967914       1 proxier.go:812] "SyncProxyRules complete" elapsed="48.292359ms"
I1006 23:54:00.252918       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:00.329402       1 proxier.go:812] "SyncProxyRules complete" elapsed="76.539427ms"
I1006 23:54:00.806023       1 service.go:301] Service services-1726/up-down-2 updated: 1 ports
I1006 23:54:00.806081       1 service.go:416] Adding new service port "services-1726/up-down-2" at 100.71.60.212:80/TCP
I1006 23:54:00.806138       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:00.868757       1 proxier.go:812] "SyncProxyRules complete" elapsed="62.663375ms"
I1006 23:54:01.869650       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:01.928556       1 proxier.go:812] "SyncProxyRules complete" elapsed="58.961317ms"
I1006 23:54:02.067567       1 service.go:301] Service webhook-4014/e2e-test-webhook updated: 1 ports
I1006 23:54:02.932495       1 service.go:416] Adding new service port "webhook-4014/e2e-test-webhook" at 100.67.21.86:8443/TCP
I1006 23:54:02.932844       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:03.001646       1 proxier.go:812] "SyncProxyRules complete" elapsed="69.167233ms"
I1006 23:54:03.565332       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:03.654108       1 proxier.go:812] "SyncProxyRules complete" elapsed="88.832057ms"
I1006 23:54:03.886830       1 service.go:301] Service webhook-4014/e2e-test-webhook updated: 0 ports
I1006 23:54:04.656326       1 service.go:441] Removing service port "webhook-4014/e2e-test-webhook"
I1006 23:54:04.656580       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:04.706611       1 proxier.go:812] "SyncProxyRules complete" elapsed="50.286576ms"
I1006 23:54:06.849347       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:06.904224       1 proxier.go:812] "SyncProxyRules complete" elapsed="54.942314ms"
I1006 23:54:07.315910       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:07.386532       1 proxier.go:812] "SyncProxyRules complete" elapsed="70.678812ms"
I1006 23:54:25.511571       1 service.go:301] Service services-9883/endpoint-test2 updated: 1 ports
I1006 23:54:25.511645       1 service.go:416] Adding new service port "services-9883/endpoint-test2" at 100.69.248.218:80/TCP
I1006 23:54:25.511715       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:25.572389       1 proxier.go:812] "SyncProxyRules complete" elapsed="60.736044ms"
I1006 23:54:25.572522       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:25.636517       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.026006ms"
I1006 23:54:27.983099       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:28.029615       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.570202ms"
I1006 23:54:30.905442       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:30.953755       1 proxier.go:812] "SyncProxyRules complete" elapsed="48.384315ms"
I1006 23:54:30.953997       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:30.998678       1 proxier.go:812] "SyncProxyRules complete" elapsed="44.874219ms"
I1006 23:54:33.124241       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:33.173845       1 proxier.go:812] "SyncProxyRules complete" elapsed="49.673984ms"
I1006 23:54:35.848344       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:35.896015       1 proxier.go:812] "SyncProxyRules complete" elapsed="47.728034ms"
I1006 23:54:36.257079       1 service.go:301] Service dns-991/dns-test-service-3 updated: 1 ports
I1006 23:54:36.257130       1 service.go:416] Adding new service port "dns-991/dns-test-service-3:http" at 100.64.198.15:80/TCP
I1006 23:54:36.257186       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:36.319107       1 proxier.go:812] "SyncProxyRules complete" elapsed="61.97161ms"
I1006 23:54:36.606565       1 service.go:301] Service services-1726/up-down-1 updated: 0 ports
I1006 23:54:37.319442       1 service.go:441] Removing service port "services-1726/up-down-1"
I1006 23:54:37.319704       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:37.381188       1 proxier.go:812] "SyncProxyRules complete" elapsed="61.746206ms"
I1006 23:54:43.770270       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:43.827588       1 proxier.go:812] "SyncProxyRules complete" elapsed="57.375414ms"
I1006 23:54:47.031019       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:47.079961       1 proxier.go:812] "SyncProxyRules complete" elapsed="49.04291ms"
I1006 23:54:48.074278       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:48.136710       1 proxier.go:812] "SyncProxyRules complete" elapsed="62.491898ms"
I1006 23:54:50.531322       1 service.go:301] Service dns-991/dns-test-service-3 updated: 0 ports
I1006 23:54:50.531369       1 service.go:441] Removing service port "dns-991/dns-test-service-3:http"
I1006 23:54:50.531429       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:50.608197       1 proxier.go:812] "SyncProxyRules complete" elapsed="76.795331ms"
I1006 23:54:50.612297       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:50.675767       1 proxier.go:812] "SyncProxyRules complete" elapsed="63.517809ms"
I1006 23:54:50.725947       1 service.go:301] Service services-9883/endpoint-test2 updated: 0 ports
I1006 23:54:51.675952       1 service.go:441] Removing service port "services-9883/endpoint-test2"
I1006 23:54:51.676074       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:51.728074       1 proxier.go:812] "SyncProxyRules complete" elapsed="52.128735ms"
I1006 23:54:56.230363       1 service.go:301] Service webhook-727/e2e-test-webhook updated: 1 ports
I1006 23:54:56.232069       1 service.go:416] Adding new service port "webhook-727/e2e-test-webhook" at 100.65.14.240:8443/TCP
I1006 23:54:56.232129       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:56.326219       1 proxier.go:812] "SyncProxyRules complete" elapsed="95.786061ms"
I1006 23:54:56.326327       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:56.426828       1 proxier.go:812] "SyncProxyRules complete" elapsed="100.549725ms"
I1006 23:54:57.169546       1 service.go:301] Service services-1726/up-down-3 updated: 1 ports
I1006 23:54:57.427990       1 service.go:416] Adding new service port "services-1726/up-down-3" at 100.64.51.255:80/TCP
I1006 23:54:57.428090       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:57.474633       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.768259ms"
I1006 23:54:59.682874       1 service.go:301] Service webhook-727/e2e-test-webhook updated: 0 ports
I1006 23:54:59.682929       1 service.go:441] Removing service port "webhook-727/e2e-test-webhook"
I1006 23:54:59.682989       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:59.816336       1 proxier.go:812] "SyncProxyRules complete" elapsed="133.396797ms"
I1006 23:54:59.816451       1 proxier.go:845] "Syncing iptables rules"
I1006 23:54:59.909308       1 proxier.go:812] "SyncProxyRules complete" elapsed="92.921094ms"
I1006 23:55:03.299247       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:03.361289       1 proxier.go:812] "SyncProxyRules complete" elapsed="62.093756ms"
I1006 23:55:07.053508       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:07.143997       1 proxier.go:812] "SyncProxyRules complete" elapsed="90.547113ms"
I1006 23:55:13.664468       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:13.736677       1 proxier.go:812] "SyncProxyRules complete" elapsed="72.305352ms"
I1006 23:55:20.820491       1 service.go:301] Service services-7749/externalname-service updated: 1 ports
I1006 23:55:20.820545       1 service.go:416] Adding new service port "services-7749/externalname-service:http" at 100.70.153.1:80/TCP
I1006 23:55:20.820603       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:20.872055       1 proxier.go:1283] "Opened local port" port="\"nodePort for services-7749/externalname-service:http\" (:31410/tcp4)"
I1006 23:55:20.879016       1 proxier.go:812] "SyncProxyRules complete" elapsed="58.462799ms"
I1006 23:55:20.879101       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:20.940852       1 proxier.go:812] "SyncProxyRules complete" elapsed="61.78377ms"
I1006 23:55:25.041121       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:25.099916       1 proxier.go:812] "SyncProxyRules complete" elapsed="58.845044ms"
I1006 23:55:30.354325       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:30.421265       1 proxier.go:812] "SyncProxyRules complete" elapsed="67.022456ms"
I1006 23:55:43.922268       1 service.go:301] Service webhook-9881/e2e-test-webhook updated: 1 ports
I1006 23:55:43.922464       1 service.go:416] Adding new service port "webhook-9881/e2e-test-webhook" at 100.64.150.118:8443/TCP
I1006 23:55:43.922552       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:43.969533       1 proxier.go:812] "SyncProxyRules complete" elapsed="47.201927ms"
I1006 23:55:43.969626       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:44.014048       1 proxier.go:812] "SyncProxyRules complete" elapsed="44.471327ms"
I1006 23:55:45.161090       1 service.go:301] Service webhook-9881/e2e-test-webhook updated: 0 ports
I1006 23:55:45.161142       1 service.go:441] Removing service port "webhook-9881/e2e-test-webhook"
I1006 23:55:45.161204       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:45.226652       1 proxier.go:812] "SyncProxyRules complete" elapsed="65.49345ms"
I1006 23:55:46.226866       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:46.273890       1 proxier.go:812] "SyncProxyRules complete" elapsed="47.086491ms"
I1006 23:55:47.801875       1 service.go:301] Service services-7749/externalname-service updated: 0 ports
I1006 23:55:47.801925       1 service.go:441] Removing service port "services-7749/externalname-service:http"
I1006 23:55:47.801988       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:47.862679       1 proxier.go:812] "SyncProxyRules complete" elapsed="60.73841ms"
I1006 23:55:48.862917       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:48.911276       1 proxier.go:812] "SyncProxyRules complete" elapsed="48.436835ms"
I1006 23:55:49.185311       1 service.go:301] Service crd-webhook-1698/e2e-test-crd-conversion-webhook updated: 1 ports
I1006 23:55:49.185364       1 service.go:416] Adding new service port "crd-webhook-1698/e2e-test-crd-conversion-webhook" at 100.69.194.32:9443/TCP
I1006 23:55:49.185423       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:49.236498       1 proxier.go:812] "SyncProxyRules complete" elapsed="51.127594ms"
I1006 23:55:50.237672       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:50.284018       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.422735ms"
I1006 23:55:53.937639       1 service.go:301] Service crd-webhook-1698/e2e-test-crd-conversion-webhook updated: 0 ports
I1006 23:55:53.937677       1 service.go:441] Removing service port "crd-webhook-1698/e2e-test-crd-conversion-webhook"
I1006 23:55:53.937735       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:54.029546       1 proxier.go:812] "SyncProxyRules complete" elapsed="91.837319ms"
I1006 23:55:54.029805       1 proxier.go:845] "Syncing iptables rules"
I1006 23:55:54.102562       1 proxier.go:812] "SyncProxyRules complete" elapsed="72.964005ms"
I1006 23:56:00.853938       1 service.go:301] Service services-1726/up-down-2 updated: 0 ports
I1006 23:56:00.854003       1 service.go:441] Removing service port "services-1726/up-down-2"
I1006 23:56:00.854283       1 proxier.go:845] "Syncing iptables rules"
I1006 23:56:00.881786       1 service.go:301] Service services-1726/up-down-3 updated: 0 ports
I1006 23:56:00.918728       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.709291ms"
I1006 23:56:00.918768       1 service.go:441] Removing service port "services-1726/up-down-3"
I1006 23:56:00.919023       1 proxier.go:845] "Syncing iptables rules"
I1006 23:56:00.986225       1 proxier.go:812] "SyncProxyRules complete" elapsed="67.417661ms"
I1006 23:56:48.991503       1 service.go:301] Service endpointslice-6509/example-empty-selector updated: 1 ports
I1006 23:56:48.991598       1 service.go:416] Adding new service port "endpointslice-6509/example-empty-selector:example" at 100.69.55.127:80/TCP
I1006 23:56:48.991662       1 proxier.go:845] "Syncing iptables rules"
I1006 23:56:49.038269       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.67203ms"
I1006 23:56:49.038514       1 proxier.go:845] "Syncing iptables rules"
I1006 23:56:49.078683       1 service.go:301] Service endpointslice-6509/example-empty-selector updated: 0 ports
I1006 23:56:49.083649       1 proxier.go:812] "SyncProxyRules complete" elapsed="45.338382ms"
I1006 23:56:50.084576       1 service.go:441] Removing service port "endpointslice-6509/example-empty-selector:example"
I1006 23:56:50.084663       1 proxier.go:845] "Syncing iptables rules"
I1006 23:56:50.149734       1 proxier.go:812] "SyncProxyRules complete" elapsed="65.161661ms"
I1006 23:57:04.589806       1 service.go:301] Service resourcequota-7181/test-service updated: 1 ports
I1006 23:57:04.589859       1 service.go:416] Adding new service port "resourcequota-7181/test-service" at 100.64.139.129:80/TCP
I1006 23:57:04.590066       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:04.635910       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.042328ms"
I1006 23:57:04.642154       1 service.go:301] Service resourcequota-7181/test-service-np updated: 1 ports
I1006 23:57:04.642356       1 service.go:416] Adding new service port "resourcequota-7181/test-service-np" at 100.67.183.208:80/TCP
I1006 23:57:04.642424       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:04.682991       1 proxier.go:1283] "Opened local port" port="\"nodePort for resourcequota-7181/test-service-np\" (:30162/tcp4)"
I1006 23:57:04.689190       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.970559ms"
I1006 23:57:06.755967       1 service.go:301] Service resourcequota-7181/test-service updated: 0 ports
I1006 23:57:06.756017       1 service.go:441] Removing service port "resourcequota-7181/test-service"
I1006 23:57:06.756073       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:06.800850       1 service.go:301] Service resourcequota-7181/test-service-np updated: 0 ports
I1006 23:57:06.815070       1 proxier.go:812] "SyncProxyRules complete" elapsed="59.045251ms"
I1006 23:57:06.815106       1 service.go:441] Removing service port "resourcequota-7181/test-service-np"
I1006 23:57:06.815154       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:06.860710       1 proxier.go:812] "SyncProxyRules complete" elapsed="45.59619ms"
I1006 23:57:16.025670       1 service.go:301] Service webhook-9541/e2e-test-webhook updated: 1 ports
I1006 23:57:16.025750       1 service.go:416] Adding new service port "webhook-9541/e2e-test-webhook" at 100.69.55.162:8443/TCP
I1006 23:57:16.025983       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:16.132793       1 proxier.go:812] "SyncProxyRules complete" elapsed="107.055896ms"
I1006 23:57:16.132888       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:16.211188       1 proxier.go:812] "SyncProxyRules complete" elapsed="78.346498ms"
I1006 23:57:17.909866       1 service.go:301] Service services-6745/service-proxy-toggled updated: 1 ports
I1006 23:57:17.909917       1 service.go:416] Adding new service port "services-6745/service-proxy-toggled" at 100.68.36.203:80/TCP
I1006 23:57:17.909981       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:17.952903       1 proxier.go:812] "SyncProxyRules complete" elapsed="42.980788ms"
I1006 23:57:18.953806       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:19.061079       1 proxier.go:812] "SyncProxyRules complete" elapsed="107.34428ms"
I1006 23:57:20.665446       1 service.go:301] Service webhook-9541/e2e-test-webhook updated: 0 ports
I1006 23:57:20.665706       1 service.go:441] Removing service port "webhook-9541/e2e-test-webhook"
I1006 23:57:20.666120       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:20.788590       1 proxier.go:812] "SyncProxyRules complete" elapsed="122.896693ms"
I1006 23:57:20.788682       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:20.905294       1 proxier.go:812] "SyncProxyRules complete" elapsed="116.65208ms"
I1006 23:57:21.793393       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:21.843842       1 proxier.go:812] "SyncProxyRules complete" elapsed="50.498974ms"
I1006 23:57:22.844945       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:22.892643       1 proxier.go:812] "SyncProxyRules complete" elapsed="47.774063ms"
I1006 23:57:23.894025       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:23.938016       1 proxier.go:812] "SyncProxyRules complete" elapsed="44.256868ms"
I1006 23:57:24.938886       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:24.991522       1 proxier.go:812] "SyncProxyRules complete" elapsed="52.740974ms"
I1006 23:57:27.764284       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:27.812228       1 proxier.go:812] "SyncProxyRules complete" elapsed="48.004118ms"
I1006 23:57:27.895975       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:27.937243       1 proxier.go:812] "SyncProxyRules complete" elapsed="41.333136ms"
I1006 23:57:28.381671       1 service.go:301] Service services-7315/service-headless-toggled updated: 1 ports
I1006 23:57:28.937488       1 service.go:416] Adding new service port "services-7315/service-headless-toggled" at 100.66.130.182:80/TCP
I1006 23:57:28.937613       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:28.984647       1 proxier.go:812] "SyncProxyRules complete" elapsed="47.218124ms"
I1006 23:57:30.752540       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:30.802047       1 proxier.go:812] "SyncProxyRules complete" elapsed="49.577035ms"
I1006 23:57:32.790441       1 service.go:301] Service services-9566/externalname-service updated: 1 ports
I1006 23:57:32.790504       1 service.go:416] Adding new service port "services-9566/externalname-service:http" at 100.66.116.23:80/TCP
I1006 23:57:32.790567       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:32.866802       1 proxier.go:812] "SyncProxyRules complete" elapsed="76.288175ms"
I1006 23:57:32.866900       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:32.995734       1 proxier.go:812] "SyncProxyRules complete" elapsed="128.872381ms"
I1006 23:57:35.461602       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:35.513206       1 proxier.go:812] "SyncProxyRules complete" elapsed="51.655743ms"
I1006 23:57:35.513315       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:35.563266       1 proxier.go:812] "SyncProxyRules complete" elapsed="50.014573ms"
I1006 23:57:42.574859       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:42.642699       1 proxier.go:812] "SyncProxyRules complete" elapsed="67.918744ms"
I1006 23:57:50.371418       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:50.420751       1 proxier.go:812] "SyncProxyRules complete" elapsed="49.403593ms"
I1006 23:57:51.971005       1 service.go:301] Service services-6745/service-proxy-toggled updated: 0 ports
I1006 23:57:51.971061       1 service.go:441] Removing service port "services-6745/service-proxy-toggled"
I1006 23:57:51.971128       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:52.031567       1 proxier.go:812] "SyncProxyRules complete" elapsed="60.49158ms"
I1006 23:57:52.031753       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:52.120607       1 proxier.go:812] "SyncProxyRules complete" elapsed="88.981851ms"
I1006 23:57:56.889843       1 service.go:301] Service services-9566/externalname-service updated: 0 ports
I1006 23:57:56.889898       1 service.go:441] Removing service port "services-9566/externalname-service:http"
I1006 23:57:56.889962       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:56.934189       1 proxier.go:812] "SyncProxyRules complete" elapsed="44.283166ms"
I1006 23:57:56.934285       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:56.983519       1 proxier.go:812] "SyncProxyRules complete" elapsed="49.282225ms"
I1006 23:57:57.375980       1 service.go:301] Service webhook-6830/e2e-test-webhook updated: 1 ports
I1006 23:57:57.983659       1 service.go:416] Adding new service port "webhook-6830/e2e-test-webhook" at 100.64.49.140:8443/TCP
I1006 23:57:57.983780       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:58.035974       1 proxier.go:812] "SyncProxyRules complete" elapsed="52.366414ms"
I1006 23:57:58.603638       1 service.go:301] Service services-6745/service-proxy-toggled updated: 1 ports
I1006 23:57:59.036154       1 service.go:416] Adding new service port "services-6745/service-proxy-toggled" at 100.68.36.203:80/TCP
I1006 23:57:59.036278       1 proxier.go:845] "Syncing iptables rules"
I1006 23:57:59.083109       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.981059ms"
I1006 23:58:01.551124       1 service.go:301] Service webhook-6830/e2e-test-webhook updated: 0 ports
I1006 23:58:01.551179       1 service.go:441] Removing service port "webhook-6830/e2e-test-webhook"
I1006 23:58:01.551244       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:01.684361       1 proxier.go:812] "SyncProxyRules complete" elapsed="133.16719ms"
I1006 23:58:01.684607       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:01.808203       1 proxier.go:812] "SyncProxyRules complete" elapsed="123.794811ms"
I1006 23:58:06.414416       1 service.go:301] Service services-15/nodeport-reuse updated: 1 ports
I1006 23:58:06.414477       1 service.go:416] Adding new service port "services-15/nodeport-reuse" at 100.70.124.26:80/TCP
I1006 23:58:06.414548       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:06.452692       1 service.go:301] Service services-15/nodeport-reuse updated: 0 ports
I1006 23:58:06.480017       1 proxier.go:1283] "Opened local port" port="\"nodePort for services-15/nodeport-reuse\" (:30013/tcp4)"
I1006 23:58:06.486791       1 proxier.go:812] "SyncProxyRules complete" elapsed="72.30782ms"
I1006 23:58:06.486829       1 service.go:441] Removing service port "services-15/nodeport-reuse"
I1006 23:58:06.486897       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:06.532193       1 proxier.go:812] "SyncProxyRules complete" elapsed="45.356523ms"
I1006 23:58:12.136720       1 service.go:301] Service webhook-3653/e2e-test-webhook updated: 1 ports
I1006 23:58:12.136787       1 service.go:416] Adding new service port "webhook-3653/e2e-test-webhook" at 100.67.254.66:8443/TCP
I1006 23:58:12.136856       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:12.183265       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.474487ms"
I1006 23:58:12.183508       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:12.228945       1 proxier.go:812] "SyncProxyRules complete" elapsed="45.626513ms"
I1006 23:58:13.410536       1 service.go:301] Service webhook-3653/e2e-test-webhook updated: 0 ports
I1006 23:58:13.410599       1 service.go:441] Removing service port "webhook-3653/e2e-test-webhook"
I1006 23:58:13.410685       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:13.460645       1 proxier.go:812] "SyncProxyRules complete" elapsed="50.033997ms"
I1006 23:58:14.460955       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:14.515613       1 proxier.go:812] "SyncProxyRules complete" elapsed="54.753559ms"
I1006 23:58:15.075881       1 service.go:301] Service services-15/nodeport-reuse updated: 1 ports
I1006 23:58:15.516155       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:15.561425       1 proxier.go:812] "SyncProxyRules complete" elapsed="45.354188ms"
I1006 23:58:18.220543       1 service.go:301] Service services-7315/service-headless-toggled updated: 0 ports
I1006 23:58:18.220595       1 service.go:441] Removing service port "services-7315/service-headless-toggled"
I1006 23:58:18.220661       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:18.266830       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.217775ms"
I1006 23:58:23.232267       1 service.go:301] Service svc-latency-2393/latency-svc-s5x4w updated: 1 ports
I1006 23:58:23.232321       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-s5x4w" at 100.69.135.157:80/TCP
I1006 23:58:23.232380       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:23.275534       1 service.go:301] Service svc-latency-2393/latency-svc-ww5ng updated: 1 ports
I1006 23:58:23.278380       1 proxier.go:812] "SyncProxyRules complete" elapsed="46.052902ms"
I1006 23:58:23.278428       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ww5ng" at 100.69.155.78:80/TCP
I1006 23:58:23.278531       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:23.308359       1 service.go:301] Service svc-latency-2393/latency-svc-jn2lf updated: 1 ports
I1006 23:58:23.327365       1 proxier.go:812] "SyncProxyRules complete" elapsed="48.941757ms"
I1006 23:58:23.359211       1 service.go:301] Service svc-latency-2393/latency-svc-9mctz updated: 1 ports
I1006 23:58:23.381524       1 service.go:301] Service svc-latency-2393/latency-svc-9z82n updated: 1 ports
I1006 23:58:23.418184       1 service.go:301] Service svc-latency-2393/latency-svc-6f9l6 updated: 1 ports
I1006 23:58:23.422594       1 service.go:301] Service svc-latency-2393/latency-svc-tx546 updated: 1 ports
I1006 23:58:23.434503       1 service.go:301] Service svc-latency-2393/latency-svc-svvp8 updated: 1 ports
I1006 23:58:23.456543       1 service.go:301] Service svc-latency-2393/latency-svc-wrpzl updated: 1 ports
I1006 23:58:23.465220       1 service.go:301] Service svc-latency-2393/latency-svc-ph7hq updated: 1 ports
I1006 23:58:23.477562       1 service.go:301] Service svc-latency-2393/latency-svc-t7d5t updated: 1 ports
I1006 23:58:23.483834       1 service.go:301] Service svc-latency-2393/latency-svc-8tzdd updated: 1 ports
I1006 23:58:23.494143       1 service.go:301] Service svc-latency-2393/latency-svc-4d8qt updated: 1 ports
I1006 23:58:23.504918       1 service.go:301] Service svc-latency-2393/latency-svc-dsrz7 updated: 1 ports
I1006 23:58:23.520828       1 service.go:301] Service svc-latency-2393/latency-svc-trffr updated: 1 ports
I1006 23:58:23.536008       1 service.go:301] Service svc-latency-2393/latency-svc-twbnv updated: 1 ports
I1006 23:58:23.552567       1 service.go:301] Service svc-latency-2393/latency-svc-xpdff updated: 1 ports
I1006 23:58:23.571617       1 service.go:301] Service svc-latency-2393/latency-svc-cwhx4 updated: 1 ports
I1006 23:58:23.588705       1 service.go:301] Service svc-latency-2393/latency-svc-8rhqc updated: 1 ports
I1006 23:58:23.603709       1 service.go:301] Service svc-latency-2393/latency-svc-25pc5 updated: 1 ports
I1006 23:58:23.628037       1 service.go:301] Service svc-latency-2393/latency-svc-6tq7k updated: 1 ports
I1006 23:58:23.643644       1 service.go:301] Service svc-latency-2393/latency-svc-ljwk7 updated: 1 ports
I1006 23:58:23.649799       1 service.go:301] Service svc-latency-2393/latency-svc-65x5t updated: 1 ports
I1006 23:58:23.659473       1 service.go:301] Service svc-latency-2393/latency-svc-hfbbl updated: 1 ports
I1006 23:58:23.679387       1 service.go:301] Service svc-latency-2393/latency-svc-bhkxf updated: 1 ports
I1006 23:58:23.684075       1 service.go:301] Service svc-latency-2393/latency-svc-8t9bn updated: 1 ports
I1006 23:58:23.710840       1 service.go:301] Service svc-latency-2393/latency-svc-f5vlj updated: 1 ports
I1006 23:58:23.712796       1 service.go:301] Service svc-latency-2393/latency-svc-p6hmv updated: 1 ports
I1006 23:58:23.724941       1 service.go:301] Service svc-latency-2393/latency-svc-t879p updated: 1 ports
I1006 23:58:23.756697       1 service.go:301] Service svc-latency-2393/latency-svc-xh9p9 updated: 1 ports
I1006 23:58:23.782088       1 service.go:301] Service svc-latency-2393/latency-svc-sm4rn updated: 1 ports
I1006 23:58:23.788693       1 service.go:301] Service svc-latency-2393/latency-svc-2vmx5 updated: 1 ports
I1006 23:58:23.802789       1 service.go:301] Service svc-latency-2393/latency-svc-9znmp updated: 1 ports
I1006 23:58:23.822276       1 service.go:301] Service svc-latency-2393/latency-svc-8cjfh updated: 1 ports
I1006 23:58:23.834171       1 service.go:301] Service svc-latency-2393/latency-svc-g9zgv updated: 1 ports
I1006 23:58:23.846403       1 service.go:301] Service svc-latency-2393/latency-svc-ntzqt updated: 1 ports
I1006 23:58:23.850030       1 service.go:301] Service svc-latency-2393/latency-svc-r76jp updated: 1 ports
I1006 23:58:23.930857       1 service.go:301] Service svc-latency-2393/latency-svc-c5kdc updated: 1 ports
I1006 23:58:23.969156       1 service.go:301] Service svc-latency-2393/latency-svc-kmgds updated: 1 ports
I1006 23:58:24.034429       1 service.go:301] Service svc-latency-2393/latency-svc-4khm5 updated: 1 ports
I1006 23:58:24.039733       1 service.go:301] Service svc-latency-2393/latency-svc-9wr74 updated: 1 ports
I1006 23:58:24.069822       1 service.go:301] Service svc-latency-2393/latency-svc-snxws updated: 1 ports
I1006 23:58:24.089197       1 service.go:301] Service svc-latency-2393/latency-svc-sg7mj updated: 1 ports
I1006 23:58:24.105092       1 service.go:301] Service svc-latency-2393/latency-svc-mjtlp updated: 1 ports
I1006 23:58:24.116522       1 service.go:301] Service svc-latency-2393/latency-svc-kqcz4 updated: 1 ports
I1006 23:58:24.137071       1 service.go:301] Service svc-latency-2393/latency-svc-4sq28 updated: 1 ports
I1006 23:58:24.137534       1 service.go:301] Service svc-latency-2393/latency-svc-fq2rs updated: 1 ports
I1006 23:58:24.155380       1 service.go:301] Service svc-latency-2393/latency-svc-xs5sp updated: 1 ports
I1006 23:58:24.185188       1 service.go:301] Service svc-latency-2393/latency-svc-rbqdn updated: 1 ports
I1006 23:58:24.194865       1 service.go:301] Service svc-latency-2393/latency-svc-qs9qp updated: 1 ports
I1006 23:58:24.208664       1 service.go:301] Service svc-latency-2393/latency-svc-x96jb updated: 1 ports
I1006 23:58:24.213474       1 service.go:301] Service svc-latency-2393/latency-svc-lmhqf updated: 1 ports
I1006 23:58:24.216167       1 service.go:301] Service svc-latency-2393/latency-svc-pjp4l updated: 1 ports
I1006 23:58:24.224011       1 service.go:301] Service svc-latency-2393/latency-svc-rvbxj updated: 1 ports
I1006 23:58:24.228387       1 service.go:301] Service svc-latency-2393/latency-svc-dtwqp updated: 1 ports
I1006 23:58:24.242752       1 service.go:301] Service svc-latency-2393/latency-svc-nzkkj updated: 1 ports
I1006 23:58:24.243046       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ntzqt" at 100.66.49.214:80/TCP
I1006 23:58:24.243135       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-8rhqc" at 100.66.61.201:80/TCP
I1006 23:58:24.243225       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ljwk7" at 100.68.127.175:80/TCP
I1006 23:58:24.243302       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-f5vlj" at 100.69.91.248:80/TCP
I1006 23:58:24.243383       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-snxws" at 100.68.162.55:80/TCP
I1006 23:58:24.243487       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-dtwqp" at 100.71.85.116:80/TCP
I1006 23:58:24.243577       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-xpdff" at 100.67.130.62:80/TCP
I1006 23:58:24.243662       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-6f9l6" at 100.67.136.118:80/TCP
I1006 23:58:24.243747       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-p6hmv" at 100.65.127.56:80/TCP
I1006 23:58:24.243827       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-9z82n" at 100.68.206.18:80/TCP
I1006 23:58:24.243908       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-rbqdn" at 100.66.38.196:80/TCP
I1006 23:58:24.243987       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-x96jb" at 100.66.26.12:80/TCP
I1006 23:58:24.244070       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-t879p" at 100.68.21.187:80/TCP
I1006 23:58:24.244194       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-nzkkj" at 100.68.123.63:80/TCP
I1006 23:58:24.244270       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-trffr" at 100.69.231.20:80/TCP
I1006 23:58:24.244353       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-r76jp" at 100.68.15.15:80/TCP
I1006 23:58:24.244430       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-4khm5" at 100.71.89.22:80/TCP
I1006 23:58:24.244515       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-pjp4l" at 100.66.27.6:80/TCP
I1006 23:58:24.244594       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-9mctz" at 100.66.58.69:80/TCP
I1006 23:58:24.244680       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-sg7mj" at 100.66.198.168:80/TCP
I1006 23:58:24.244762       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-xh9p9" at 100.71.68.187:80/TCP
I1006 23:58:24.244841       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-2vmx5" at 100.69.216.141:80/TCP
I1006 23:58:24.244922       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-kmgds" at 100.70.146.228:80/TCP
I1006 23:58:24.245004       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-4sq28" at 100.71.187.56:80/TCP
I1006 23:58:24.245082       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-xs5sp" at 100.68.117.227:80/TCP
I1006 23:58:24.245163       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-qs9qp" at 100.64.177.126:80/TCP
I1006 23:58:24.245241       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-cwhx4" at 100.66.152.174:80/TCP
I1006 23:58:24.245316       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-9znmp" at 100.71.146.171:80/TCP
I1006 23:58:24.245392       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-mjtlp" at 100.71.8.204:80/TCP
I1006 23:58:24.245481       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-kqcz4" at 100.70.82.247:80/TCP
I1006 23:58:24.245569       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-wrpzl" at 100.64.138.72:80/TCP
I1006 23:58:24.245659       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ph7hq" at 100.67.185.76:80/TCP
I1006 23:58:24.245752       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-fq2rs" at 100.68.231.131:80/TCP
I1006 23:58:24.245855       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-rvbxj" at 100.66.47.186:80/TCP
I1006 23:58:24.245931       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-jn2lf" at 100.68.33.5:80/TCP
I1006 23:58:24.246019       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-t7d5t" at 100.66.197.82:80/TCP
I1006 23:58:24.246113       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-8tzdd" at 100.70.96.11:80/TCP
I1006 23:58:24.246174       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-dsrz7" at 100.69.123.66:80/TCP
I1006 23:58:24.246269       1 service.go:416]
Adding new service port \"svc-latency-2393/latency-svc-25pc5\" at 100.65.58.136:80/TCP\nI1006 23:58:24.246349       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6tq7k\" at 100.68.95.32:80/TCP\nI1006 23:58:24.246430       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-hfbbl\" at 100.67.81.36:80/TCP\nI1006 23:58:24.246511       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sm4rn\" at 100.68.19.90:80/TCP\nI1006 23:58:24.246591       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bhkxf\" at 100.69.106.65:80/TCP\nI1006 23:58:24.246680       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-c5kdc\" at 100.66.170.89:80/TCP\nI1006 23:58:24.246763       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9wr74\" at 100.67.234.202:80/TCP\nI1006 23:58:24.246843       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-lmhqf\" at 100.64.30.102:80/TCP\nI1006 23:58:24.246934       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-twbnv\" at 100.64.175.60:80/TCP\nI1006 23:58:24.247017       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-65x5t\" at 100.70.46.38:80/TCP\nI1006 23:58:24.247102       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8cjfh\" at 100.71.51.213:80/TCP\nI1006 23:58:24.247179       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4d8qt\" at 100.64.193.85:80/TCP\nI1006 23:58:24.247257       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-svvp8\" at 100.64.26.238:80/TCP\nI1006 23:58:24.247335       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8t9bn\" at 100.68.253.246:80/TCP\nI1006 23:58:24.247397       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-g9zgv\" at 100.71.202.47:80/TCP\nI1006 23:58:24.247492       1 
service.go:416] Adding new service port \"svc-latency-2393/latency-svc-tx546\" at 100.71.129.35:80/TCP\nI1006 23:58:24.248210       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:24.258221       1 service.go:301] Service svc-latency-2393/latency-svc-ghnmt updated: 1 ports\nI1006 23:58:24.268915       1 service.go:301] Service svc-latency-2393/latency-svc-9vlxv updated: 1 ports\nI1006 23:58:24.288477       1 service.go:301] Service svc-latency-2393/latency-svc-5z6dn updated: 1 ports\nI1006 23:58:24.308193       1 service.go:301] Service svc-latency-2393/latency-svc-kt9c2 updated: 1 ports\nI1006 23:58:24.313566       1 service.go:301] Service svc-latency-2393/latency-svc-sqjb2 updated: 1 ports\nI1006 23:58:24.344600       1 service.go:301] Service svc-latency-2393/latency-svc-pkdj4 updated: 1 ports\nI1006 23:58:24.366644       1 service.go:301] Service svc-latency-2393/latency-svc-2v4f6 updated: 1 ports\nI1006 23:58:24.384938       1 service.go:301] Service svc-latency-2393/latency-svc-q5pc8 updated: 1 ports\nI1006 23:58:24.399644       1 service.go:301] Service svc-latency-2393/latency-svc-hbrxx updated: 1 ports\nI1006 23:58:24.410443       1 service.go:301] Service svc-latency-2393/latency-svc-4rntd updated: 1 ports\nI1006 23:58:24.448976       1 service.go:301] Service svc-latency-2393/latency-svc-n4pq2 updated: 1 ports\nI1006 23:58:24.482762       1 service.go:301] Service svc-latency-2393/latency-svc-vrx8c updated: 1 ports\nI1006 23:58:24.484573       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"241.530856ms\"\nI1006 23:58:24.522737       1 service.go:301] Service svc-latency-2393/latency-svc-72blr updated: 1 ports\nI1006 23:58:24.572691       1 service.go:301] Service svc-latency-2393/latency-svc-qsxlg updated: 1 ports\nI1006 23:58:24.621440       1 service.go:301] Service svc-latency-2393/latency-svc-2ht6w updated: 1 ports\nI1006 23:58:24.692793       1 service.go:301] Service svc-latency-2393/latency-svc-d7x2s updated: 1 ports\nI1006 
23:58:24.736742       1 service.go:301] Service svc-latency-2393/latency-svc-dd9dt updated: 1 ports\nI1006 23:58:24.800932       1 service.go:301] Service svc-latency-2393/latency-svc-pws45 updated: 1 ports\nI1006 23:58:24.879643       1 service.go:301] Service svc-latency-2393/latency-svc-mqgrv updated: 1 ports\nI1006 23:58:24.896068       1 service.go:301] Service svc-latency-2393/latency-svc-gk844 updated: 1 ports\nI1006 23:58:24.924760       1 service.go:301] Service svc-latency-2393/latency-svc-q6rjz updated: 1 ports\nI1006 23:58:24.988264       1 service.go:301] Service svc-latency-2393/latency-svc-m5g7n updated: 1 ports\nI1006 23:58:25.040831       1 service.go:301] Service svc-latency-2393/latency-svc-2f6b8 updated: 1 ports\nI1006 23:58:25.073901       1 service.go:301] Service svc-latency-2393/latency-svc-qrk2d updated: 1 ports\nI1006 23:58:25.122642       1 service.go:301] Service svc-latency-2393/latency-svc-nr2rl updated: 1 ports\nI1006 23:58:25.171587       1 service.go:301] Service svc-latency-2393/latency-svc-vgb5r updated: 1 ports\nI1006 23:58:25.220936       1 service.go:301] Service svc-latency-2393/latency-svc-7qttt updated: 1 ports\nI1006 23:58:25.241681       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-dd9dt\" at 100.68.63.150:80/TCP\nI1006 23:58:25.241719       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kt9c2\" at 100.70.181.171:80/TCP\nI1006 23:58:25.241732       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-m5g7n\" at 100.68.50.188:80/TCP\nI1006 23:58:25.241742       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-72blr\" at 100.67.87.72:80/TCP\nI1006 23:58:25.241753       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2ht6w\" at 100.68.47.173:80/TCP\nI1006 23:58:25.241764       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-7qttt\" at 100.65.61.100:80/TCP\nI1006 23:58:25.241773    
   1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9vlxv\" at 100.69.156.141:80/TCP\nI1006 23:58:25.241784       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q5pc8\" at 100.66.29.144:80/TCP\nI1006 23:58:25.241794       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-vgb5r\" at 100.68.107.120:80/TCP\nI1006 23:58:25.241804       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-d7x2s\" at 100.67.31.208:80/TCP\nI1006 23:58:25.241814       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-pws45\" at 100.69.250.178:80/TCP\nI1006 23:58:25.241847       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-nr2rl\" at 100.64.255.64:80/TCP\nI1006 23:58:25.241859       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-5z6dn\" at 100.66.6.26:80/TCP\nI1006 23:58:25.241871       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-n4pq2\" at 100.68.131.119:80/TCP\nI1006 23:58:25.241882       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-mqgrv\" at 100.65.155.73:80/TCP\nI1006 23:58:25.241893       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-gk844\" at 100.68.167.184:80/TCP\nI1006 23:58:25.241905       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-qsxlg\" at 100.70.200.109:80/TCP\nI1006 23:58:25.241916       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-vrx8c\" at 100.70.190.69:80/TCP\nI1006 23:58:25.242189       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2f6b8\" at 100.71.97.214:80/TCP\nI1006 23:58:25.242474       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2v4f6\" at 100.67.48.188:80/TCP\nI1006 23:58:25.242598       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-hbrxx\" at 100.70.187.231:80/TCP\nI1006 
23:58:25.242682       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4rntd\" at 100.67.64.32:80/TCP\nI1006 23:58:25.242725       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q6rjz\" at 100.70.216.63:80/TCP\nI1006 23:58:25.242782       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-qrk2d\" at 100.71.28.238:80/TCP\nI1006 23:58:25.242887       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ghnmt\" at 100.64.17.77:80/TCP\nI1006 23:58:25.242909       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sqjb2\" at 100.67.195.246:80/TCP\nI1006 23:58:25.242925       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-pkdj4\" at 100.69.31.80:80/TCP\nI1006 23:58:25.243394       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:25.278756       1 service.go:301] Service svc-latency-2393/latency-svc-r5wxn updated: 1 ports\nI1006 23:58:25.305269       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.595597ms\"\nI1006 23:58:25.324624       1 service.go:301] Service svc-latency-2393/latency-svc-kgc4n updated: 1 ports\nI1006 23:58:25.370323       1 service.go:301] Service svc-latency-2393/latency-svc-s7pjz updated: 1 ports\nI1006 23:58:25.420738       1 service.go:301] Service svc-latency-2393/latency-svc-p5gsf updated: 1 ports\nI1006 23:58:25.472446       1 service.go:301] Service svc-latency-2393/latency-svc-ptrkr updated: 1 ports\nI1006 23:58:25.523893       1 service.go:301] Service svc-latency-2393/latency-svc-ftpls updated: 1 ports\nI1006 23:58:25.579449       1 service.go:301] Service svc-latency-2393/latency-svc-rvg8x updated: 1 ports\nI1006 23:58:25.625766       1 service.go:301] Service svc-latency-2393/latency-svc-j7zmj updated: 1 ports\nI1006 23:58:25.678382       1 service.go:301] Service svc-latency-2393/latency-svc-cl57v updated: 1 ports\nI1006 23:58:25.724830       1 service.go:301] Service 
svc-latency-2393/latency-svc-lkcg4 updated: 1 ports\nI1006 23:58:25.777625       1 service.go:301] Service svc-latency-2393/latency-svc-lz54r updated: 1 ports\nI1006 23:58:25.825972       1 service.go:301] Service svc-latency-2393/latency-svc-k4jsf updated: 1 ports\nI1006 23:58:25.877749       1 service.go:301] Service svc-latency-2393/latency-svc-x7857 updated: 1 ports\nI1006 23:58:25.927314       1 service.go:301] Service svc-latency-2393/latency-svc-wwvrh updated: 1 ports\nI1006 23:58:25.988914       1 service.go:301] Service svc-latency-2393/latency-svc-wm7kf updated: 1 ports\nI1006 23:58:26.029483       1 service.go:301] Service svc-latency-2393/latency-svc-xqd76 updated: 1 ports\nI1006 23:58:26.071031       1 service.go:301] Service svc-latency-2393/latency-svc-zr7lv updated: 1 ports\nI1006 23:58:26.133329       1 service.go:301] Service svc-latency-2393/latency-svc-2zbkg updated: 1 ports\nI1006 23:58:26.181881       1 service.go:301] Service svc-latency-2393/latency-svc-2s2mf updated: 1 ports\nI1006 23:58:26.223590       1 service.go:301] Service svc-latency-2393/latency-svc-v8qq2 updated: 1 ports\nI1006 23:58:26.242681       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-lz54r\" at 100.65.192.95:80/TCP\nI1006 23:58:26.242719       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-x7857\" at 100.71.166.4:80/TCP\nI1006 23:58:26.242732       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-wm7kf\" at 100.65.82.104:80/TCP\nI1006 23:58:26.242743       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2zbkg\" at 100.70.188.23:80/TCP\nI1006 23:58:26.242754       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-p5gsf\" at 100.66.165.114:80/TCP\nI1006 23:58:26.242764       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ftpls\" at 100.64.143.25:80/TCP\nI1006 23:58:26.242776       1 service.go:416] Adding new service port 
\"svc-latency-2393/latency-svc-rvg8x\" at 100.69.125.155:80/TCP\nI1006 23:58:26.242787       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-v8qq2\" at 100.64.27.194:80/TCP\nI1006 23:58:26.242808       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-j7zmj\" at 100.67.237.129:80/TCP\nI1006 23:58:26.242819       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-cl57v\" at 100.69.213.69:80/TCP\nI1006 23:58:26.242831       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-k4jsf\" at 100.71.43.3:80/TCP\nI1006 23:58:26.242841       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-s7pjz\" at 100.67.55.36:80/TCP\nI1006 23:58:26.242851       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xqd76\" at 100.70.45.202:80/TCP\nI1006 23:58:26.242861       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2s2mf\" at 100.71.104.99:80/TCP\nI1006 23:58:26.242873       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-lkcg4\" at 100.68.199.120:80/TCP\nI1006 23:58:26.242884       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-wwvrh\" at 100.66.193.177:80/TCP\nI1006 23:58:26.242895       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zr7lv\" at 100.66.102.97:80/TCP\nI1006 23:58:26.242921       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-r5wxn\" at 100.64.94.70:80/TCP\nI1006 23:58:26.242933       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kgc4n\" at 100.71.254.77:80/TCP\nI1006 23:58:26.242960       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ptrkr\" at 100.64.203.215:80/TCP\nI1006 23:58:26.243420       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:26.282596       1 service.go:301] Service svc-latency-2393/latency-svc-jmqfb updated: 1 ports\nI1006 
23:58:26.310997       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"68.3226ms\"\nI1006 23:58:26.410959       1 service.go:301] Service svc-latency-2393/latency-svc-z6dpv updated: 1 ports\nI1006 23:58:26.435522       1 service.go:301] Service svc-latency-2393/latency-svc-8p67f updated: 1 ports\nI1006 23:58:26.458692       1 service.go:301] Service svc-latency-2393/latency-svc-bfk2f updated: 1 ports\nI1006 23:58:26.492283       1 service.go:301] Service svc-latency-2393/latency-svc-zgz8f updated: 1 ports\nI1006 23:58:26.527712       1 service.go:301] Service svc-latency-2393/latency-svc-z29ch updated: 1 ports\nI1006 23:58:26.592639       1 service.go:301] Service svc-latency-2393/latency-svc-fmp5f updated: 1 ports\nI1006 23:58:26.640166       1 service.go:301] Service svc-latency-2393/latency-svc-n7xzt updated: 1 ports\nI1006 23:58:26.687397       1 service.go:301] Service svc-latency-2393/latency-svc-86bkf updated: 1 ports\nI1006 23:58:26.772678       1 service.go:301] Service svc-latency-2393/latency-svc-fztg4 updated: 1 ports\nI1006 23:58:26.823059       1 service.go:301] Service svc-latency-2393/latency-svc-mpn8x updated: 1 ports\nI1006 23:58:26.871475       1 service.go:301] Service svc-latency-2393/latency-svc-l2rmx updated: 1 ports\nI1006 23:58:26.917732       1 service.go:301] Service svc-latency-2393/latency-svc-nb9qf updated: 1 ports\nI1006 23:58:26.998712       1 service.go:301] Service svc-latency-2393/latency-svc-78249 updated: 1 ports\nI1006 23:58:27.032242       1 service.go:301] Service svc-latency-2393/latency-svc-kwdlc updated: 1 ports\nI1006 23:58:27.090302       1 service.go:301] Service svc-latency-2393/latency-svc-8lxw5 updated: 1 ports\nI1006 23:58:27.138748       1 service.go:301] Service svc-latency-2393/latency-svc-xdldk updated: 1 ports\nI1006 23:58:27.183359       1 service.go:301] Service svc-latency-2393/latency-svc-88vj8 updated: 1 ports\nI1006 23:58:27.223273       1 service.go:301] Service svc-latency-2393/latency-svc-95dd8 
updated: 1 ports\nI1006 23:58:27.239895       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-86bkf\" at 100.66.101.146:80/TCP\nI1006 23:58:27.239945       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-l2rmx\" at 100.64.20.187:80/TCP\nI1006 23:58:27.239999       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-88vj8\" at 100.66.231.169:80/TCP\nI1006 23:58:27.240017       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-z6dpv\" at 100.68.119.16:80/TCP\nI1006 23:58:27.240144       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-fmp5f\" at 100.67.242.254:80/TCP\nI1006 23:58:27.240297       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-n7xzt\" at 100.69.231.36:80/TCP\nI1006 23:58:27.240423       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-fztg4\" at 100.65.97.11:80/TCP\nI1006 23:58:27.240447       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xdldk\" at 100.70.121.146:80/TCP\nI1006 23:58:27.240493       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-jmqfb\" at 100.68.100.20:80/TCP\nI1006 23:58:27.240513       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8p67f\" at 100.69.45.24:80/TCP\nI1006 23:58:27.240551       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-mpn8x\" at 100.69.174.76:80/TCP\nI1006 23:58:27.240571       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-nb9qf\" at 100.69.148.228:80/TCP\nI1006 23:58:27.240582       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-78249\" at 100.68.197.231:80/TCP\nI1006 23:58:27.240620       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-kwdlc\" at 100.70.53.155:80/TCP\nI1006 23:58:27.240659       1 service.go:416] Adding new service port 
\"svc-latency-2393/latency-svc-8lxw5\" at 100.66.40.198:80/TCP\nI1006 23:58:27.240681       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-95dd8\" at 100.69.59.29:80/TCP\nI1006 23:58:27.240700       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zgz8f\" at 100.68.82.254:80/TCP\nI1006 23:58:27.240718       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-z29ch\" at 100.70.59.18:80/TCP\nI1006 23:58:27.240776       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bfk2f\" at 100.71.23.207:80/TCP\nI1006 23:58:27.241375       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:27.275189       1 service.go:301] Service svc-latency-2393/latency-svc-zgfcn updated: 1 ports\nI1006 23:58:27.308290       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"68.400125ms\"\nI1006 23:58:27.351819       1 service.go:301] Service svc-latency-2393/latency-svc-zjs27 updated: 1 ports\nI1006 23:58:27.390345       1 service.go:301] Service svc-latency-2393/latency-svc-c8968 updated: 1 ports\nI1006 23:58:27.427596       1 service.go:301] Service svc-latency-2393/latency-svc-d4nf7 updated: 1 ports\nI1006 23:58:27.479032       1 service.go:301] Service svc-latency-2393/latency-svc-ggr9f updated: 1 ports\nI1006 23:58:27.523510       1 service.go:301] Service svc-latency-2393/latency-svc-ldq82 updated: 1 ports\nI1006 23:58:27.584498       1 service.go:301] Service svc-latency-2393/latency-svc-rq2f4 updated: 1 ports\nI1006 23:58:27.629731       1 service.go:301] Service svc-latency-2393/latency-svc-4mcw7 updated: 1 ports\nI1006 23:58:27.684504       1 service.go:301] Service svc-latency-2393/latency-svc-rbgp8 updated: 1 ports\nI1006 23:58:27.734845       1 service.go:301] Service svc-latency-2393/latency-svc-gv4lt updated: 1 ports\nI1006 23:58:27.786012       1 service.go:301] Service svc-latency-2393/latency-svc-ncgrr updated: 1 ports\nI1006 23:58:27.824696       1 service.go:301] 
Service svc-latency-2393/latency-svc-bgk5c updated: 1 ports\nI1006 23:58:27.954940       1 service.go:301] Service svc-latency-2393/latency-svc-2mfr7 updated: 1 ports\nI1006 23:58:28.005302       1 service.go:301] Service svc-latency-2393/latency-svc-45s85 updated: 1 ports\nI1006 23:58:28.039692       1 service.go:301] Service svc-latency-2393/latency-svc-q84bn updated: 1 ports\nI1006 23:58:28.087427       1 service.go:301] Service svc-latency-2393/latency-svc-895d9 updated: 1 ports\nI1006 23:58:28.109077       1 service.go:301] Service svc-latency-2393/latency-svc-6s79h updated: 1 ports\nI1006 23:58:28.145917       1 service.go:301] Service svc-latency-2393/latency-svc-8bv9v updated: 1 ports\nI1006 23:58:28.223834       1 service.go:301] Service svc-latency-2393/latency-svc-xjw58 updated: 1 ports\nI1006 23:58:28.237858       1 service.go:301] Service svc-latency-2393/latency-svc-f9f5k updated: 1 ports\nI1006 23:58:28.237923       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-2mfr7\" at 100.69.22.93:80/TCP\nI1006 23:58:28.237957       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-8bv9v\" at 100.65.151.88:80/TCP\nI1006 23:58:28.237971       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zgfcn\" at 100.68.94.191:80/TCP\nI1006 23:58:28.237982       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ldq82\" at 100.64.140.226:80/TCP\nI1006 23:58:28.237995       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-4mcw7\" at 100.66.140.128:80/TCP\nI1006 23:58:28.238006       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-gv4lt\" at 100.65.148.7:80/TCP\nI1006 23:58:28.238034       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ncgrr\" at 100.65.110.85:80/TCP\nI1006 23:58:28.238046       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bgk5c\" at 100.65.213.207:80/TCP\nI1006 
23:58:28.238071       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-c8968\" at 100.70.149.11:80/TCP\nI1006 23:58:28.238089       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ggr9f\" at 100.71.215.40:80/TCP\nI1006 23:58:28.238107       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rq2f4\" at 100.70.254.79:80/TCP\nI1006 23:58:28.238118       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-q84bn\" at 100.71.251.175:80/TCP\nI1006 23:58:28.238130       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-zjs27\" at 100.66.206.217:80/TCP\nI1006 23:58:28.238148       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-895d9\" at 100.71.23.121:80/TCP\nI1006 23:58:28.238169       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-xjw58\" at 100.69.219.160:80/TCP\nI1006 23:58:28.238187       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-d4nf7\" at 100.70.119.195:80/TCP\nI1006 23:58:28.238198       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rbgp8\" at 100.64.179.229:80/TCP\nI1006 23:58:28.238208       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-45s85\" at 100.67.27.125:80/TCP\nI1006 23:58:28.238233       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6s79h\" at 100.71.11.71:80/TCP\nI1006 23:58:28.238250       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-f9f5k\" at 100.64.83.106:80/TCP\nI1006 23:58:28.238814       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:28.308305       1 service.go:301] Service svc-latency-2393/latency-svc-r57pf updated: 1 ports\nI1006 23:58:28.309431       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"71.500594ms\"\nI1006 23:58:28.405629       1 service.go:301] Service svc-latency-2393/latency-svc-9jfgq updated: 1 ports\nI1006 
23:58:28.425593       1 service.go:301] Service svc-latency-2393/latency-svc-84jwb updated: 1 ports\nI1006 23:58:28.446998       1 service.go:301] Service svc-latency-2393/latency-svc-thr85 updated: 1 ports\nI1006 23:58:28.469881       1 service.go:301] Service svc-latency-2393/latency-svc-644q4 updated: 1 ports\nI1006 23:58:28.535635       1 service.go:301] Service svc-latency-2393/latency-svc-vmbwr updated: 1 ports\nI1006 23:58:28.576043       1 service.go:301] Service svc-latency-2393/latency-svc-z4dpb updated: 1 ports\nI1006 23:58:28.621960       1 service.go:301] Service svc-latency-2393/latency-svc-sg2w2 updated: 1 ports\nI1006 23:58:28.669723       1 service.go:301] Service svc-latency-2393/latency-svc-f8mr2 updated: 1 ports\nI1006 23:58:28.722946       1 service.go:301] Service svc-latency-2393/latency-svc-b47ct updated: 1 ports\nI1006 23:58:28.778478       1 service.go:301] Service svc-latency-2393/latency-svc-rmhph updated: 1 ports\nI1006 23:58:28.839932       1 service.go:301] Service svc-latency-2393/latency-svc-bqrrs updated: 1 ports\nI1006 23:58:28.871408       1 service.go:301] Service svc-latency-2393/latency-svc-ccshc updated: 1 ports\nI1006 23:58:28.960511       1 service.go:301] Service services-2188/clusterip-service updated: 1 ports\nI1006 23:58:28.980090       1 service.go:301] Service svc-latency-2393/latency-svc-ldldr updated: 1 ports\nI1006 23:58:28.996302       1 service.go:301] Service services-2188/externalsvc updated: 1 ports\nI1006 23:58:29.047377       1 service.go:301] Service svc-latency-2393/latency-svc-jbm7q updated: 1 ports\nI1006 23:58:29.091514       1 service.go:301] Service svc-latency-2393/latency-svc-b8cp5 updated: 1 ports\nI1006 23:58:29.138687       1 service.go:301] Service svc-latency-2393/latency-svc-6wvqq updated: 1 ports\nI1006 23:58:29.173820       1 service.go:301] Service svc-latency-2393/latency-svc-cwcq2 updated: 1 ports\nI1006 23:58:29.219272       1 service.go:301] Service svc-latency-2393/latency-svc-c9bqk 
updated: 1 ports\nI1006 23:58:29.238464       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-bqrrs\" at 100.66.222.25:80/TCP\nI1006 23:58:29.238506       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-ldldr\" at 100.70.145.50:80/TCP\nI1006 23:58:29.238521       1 service.go:416] Adding new service port \"services-2188/externalsvc\" at 100.67.15.109:80/TCP\nI1006 23:58:29.238532       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-6wvqq\" at 100.66.42.185:80/TCP\nI1006 23:58:29.238568       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-c9bqk\" at 100.64.93.212:80/TCP\nI1006 23:58:29.238588       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-9jfgq\" at 100.65.63.163:80/TCP\nI1006 23:58:29.238602       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-84jwb\" at 100.69.245.177:80/TCP\nI1006 23:58:29.238615       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-sg2w2\" at 100.66.99.238:80/TCP\nI1006 23:58:29.238627       1 service.go:416] Adding new service port \"services-2188/clusterip-service\" at 100.65.82.211:80/TCP\nI1006 23:58:29.238638       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-b8cp5\" at 100.66.52.53:80/TCP\nI1006 23:58:29.238651       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-cwcq2\" at 100.66.28.135:80/TCP\nI1006 23:58:29.238663       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-thr85\" at 100.70.212.241:80/TCP\nI1006 23:58:29.238676       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-f8mr2\" at 100.70.95.125:80/TCP\nI1006 23:58:29.238687       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-rmhph\" at 100.66.202.40:80/TCP\nI1006 23:58:29.238699       1 service.go:416] Adding new service port \"svc-latency-2393/latency-svc-jbm7q\" at 
100.71.133.0:80/TCP
I1006 23:58:29.238711       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-b47ct" at 100.66.141.67:80/TCP
I1006 23:58:29.238724       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ccshc" at 100.64.239.104:80/TCP
I1006 23:58:29.238735       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-r57pf" at 100.65.49.96:80/TCP
I1006 23:58:29.238749       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-644q4" at 100.66.50.239:80/TCP
I1006 23:58:29.238760       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-vmbwr" at 100.64.176.200:80/TCP
I1006 23:58:29.238771       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-z4dpb" at 100.66.180.253:80/TCP
I1006 23:58:29.239408       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:29.278564       1 service.go:301] Service svc-latency-2393/latency-svc-jw9bd updated: 1 ports
I1006 23:58:29.321807       1 proxier.go:812] "SyncProxyRules complete" elapsed="83.359674ms"
I1006 23:58:29.325497       1 service.go:301] Service svc-latency-2393/latency-svc-q8npc updated: 1 ports
I1006 23:58:29.374100       1 service.go:301] Service svc-latency-2393/latency-svc-vxmjf updated: 1 ports
I1006 23:58:29.422224       1 service.go:301] Service svc-latency-2393/latency-svc-gnp8t updated: 1 ports
I1006 23:58:29.468209       1 service.go:301] Service svc-latency-2393/latency-svc-57dmk updated: 1 ports
I1006 23:58:29.521539       1 service.go:301] Service svc-latency-2393/latency-svc-849zs updated: 1 ports
I1006 23:58:29.577366       1 service.go:301] Service svc-latency-2393/latency-svc-q7xnp updated: 1 ports
I1006 23:58:29.626131       1 service.go:301] Service svc-latency-2393/latency-svc-dhlgm updated: 1 ports
I1006 23:58:29.671454       1 service.go:301] Service svc-latency-2393/latency-svc-6n5j5 updated: 1 ports
I1006 23:58:29.770541       1 service.go:301] Service svc-latency-2393/latency-svc-xg4ff updated: 1 ports
I1006 23:58:29.874356       1 service.go:301] Service svc-latency-2393/latency-svc-sxl4j updated: 1 ports
I1006 23:58:29.921809       1 service.go:301] Service svc-latency-2393/latency-svc-7f76x updated: 1 ports
I1006 23:58:29.979964       1 service.go:301] Service svc-latency-2393/latency-svc-bjp7n updated: 1 ports
I1006 23:58:30.028454       1 service.go:301] Service svc-latency-2393/latency-svc-2j9h2 updated: 1 ports
I1006 23:58:30.072355       1 service.go:301] Service svc-latency-2393/latency-svc-6fzt9 updated: 1 ports
I1006 23:58:30.126392       1 service.go:301] Service svc-latency-2393/latency-svc-sx9kt updated: 1 ports
I1006 23:58:30.175275       1 service.go:301] Service svc-latency-2393/latency-svc-h467x updated: 1 ports
I1006 23:58:30.221366       1 service.go:301] Service svc-latency-2393/latency-svc-brlrb updated: 1 ports
I1006 23:58:30.239578       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-vxmjf" at 100.65.109.41:80/TCP
I1006 23:58:30.239609       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-dhlgm" at 100.66.223.244:80/TCP
I1006 23:58:30.239624       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-sxl4j" at 100.70.57.250:80/TCP
I1006 23:58:30.239636       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-bjp7n" at 100.66.233.172:80/TCP
I1006 23:58:30.239648       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-849zs" at 100.68.240.211:80/TCP
I1006 23:58:30.239661       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-xg4ff" at 100.70.68.200:80/TCP
I1006 23:58:30.239672       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-7f76x" at 100.68.108.104:80/TCP
I1006 23:58:30.239683       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-6fzt9" at 100.64.165.179:80/TCP
I1006 23:58:30.239697       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-sx9kt" at 100.69.115.71:80/TCP
I1006 23:58:30.239709       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-jw9bd" at 100.70.191.80:80/TCP
I1006 23:58:30.239720       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-q8npc" at 100.66.185.46:80/TCP
I1006 23:58:30.239733       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-h467x" at 100.68.224.66:80/TCP
I1006 23:58:30.239745       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-brlrb" at 100.64.106.114:80/TCP
I1006 23:58:30.239755       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-gnp8t" at 100.71.27.244:80/TCP
I1006 23:58:30.239766       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-57dmk" at 100.71.120.19:80/TCP
I1006 23:58:30.239776       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-q7xnp" at 100.69.90.229:80/TCP
I1006 23:58:30.239787       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-6n5j5" at 100.64.185.248:80/TCP
I1006 23:58:30.239798       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-2j9h2" at 100.65.255.116:80/TCP
I1006 23:58:30.240581       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:30.282522       1 service.go:301] Service svc-latency-2393/latency-svc-rtrd7 updated: 1 ports
I1006 23:58:30.342443       1 service.go:301] Service svc-latency-2393/latency-svc-5xvq5 updated: 1 ports
I1006 23:58:30.365263       1 proxier.go:812] "SyncProxyRules complete" elapsed="125.689301ms"
I1006 23:58:30.404697       1 service.go:301] Service svc-latency-2393/latency-svc-t5tpd updated: 1 ports
I1006 23:58:30.426808       1 service.go:301] Service svc-latency-2393/latency-svc-tzfpz updated: 1 ports
I1006 23:58:30.476431       1 service.go:301] Service svc-latency-2393/latency-svc-rktpx updated: 1 ports
I1006 23:58:30.528539       1 service.go:301] Service svc-latency-2393/latency-svc-vsss7 updated: 1 ports
I1006 23:58:30.580615       1 service.go:301] Service svc-latency-2393/latency-svc-4wc8j updated: 1 ports
I1006 23:58:30.624276       1 service.go:301] Service svc-latency-2393/latency-svc-tp5sl updated: 1 ports
I1006 23:58:30.671914       1 service.go:301] Service svc-latency-2393/latency-svc-fvxws updated: 1 ports
I1006 23:58:30.723648       1 service.go:301] Service svc-latency-2393/latency-svc-2zvvp updated: 1 ports
I1006 23:58:30.772993       1 service.go:301] Service svc-latency-2393/latency-svc-ml7jn updated: 1 ports
I1006 23:58:30.820708       1 service.go:301] Service services-7315/service-headless-toggled updated: 1 ports
I1006 23:58:30.840848       1 service.go:301] Service svc-latency-2393/latency-svc-4qvp2 updated: 1 ports
I1006 23:58:30.888621       1 service.go:301] Service svc-latency-2393/latency-svc-zkx4z updated: 1 ports
I1006 23:58:30.927786       1 service.go:301] Service svc-latency-2393/latency-svc-nkxnz updated: 1 ports
I1006 23:58:30.980049       1 service.go:301] Service svc-latency-2393/latency-svc-5jh9j updated: 1 ports
I1006 23:58:31.023130       1 service.go:301] Service svc-latency-2393/latency-svc-hr77g updated: 1 ports
I1006 23:58:31.075522       1 service.go:301] Service svc-latency-2393/latency-svc-xspnn updated: 1 ports
I1006 23:58:31.136583       1 service.go:301] Service svc-latency-2393/latency-svc-6vb2d updated: 1 ports
I1006 23:58:31.186009       1 service.go:301] Service svc-latency-2393/latency-svc-r5l7x updated: 1 ports
I1006 23:58:31.227676       1 service.go:301] Service svc-latency-2393/latency-svc-n7qcb updated: 1 ports
I1006 23:58:31.241445       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-6vb2d" at 100.66.97.1:80/TCP
I1006 23:58:31.241652       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-zkx4z" at 100.66.237.36:80/TCP
I1006 23:58:31.241771       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-nkxnz" at 100.65.118.224:80/TCP
I1006 23:58:31.242001       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-5jh9j" at 100.66.207.96:80/TCP
I1006 23:58:31.242107       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-hr77g" at 100.65.96.247:80/TCP
I1006 23:58:31.242213       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-xspnn" at 100.71.153.244:80/TCP
I1006 23:58:31.242320       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-r5l7x" at 100.64.95.212:80/TCP
I1006 23:58:31.242426       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-rtrd7" at 100.67.83.90:80/TCP
I1006 23:58:31.242538       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-rktpx" at 100.68.28.108:80/TCP
I1006 23:58:31.242634       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-fvxws" at 100.69.167.182:80/TCP
I1006 23:58:31.242734       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-4qvp2" at 100.68.162.48:80/TCP
I1006 23:58:31.242840       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-5xvq5" at 100.66.217.93:80/TCP
I1006 23:58:31.242941       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-tzfpz" at 100.65.5.244:80/TCP
I1006 23:58:31.243045       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-4wc8j" at 100.67.200.104:80/TCP
I1006 23:58:31.243166       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-2zvvp" at 100.68.86.224:80/TCP
I1006 23:58:31.243386       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ml7jn" at 100.68.72.109:80/TCP
I1006 23:58:31.243500       1 service.go:416] Adding new service port "services-7315/service-headless-toggled" at 100.66.130.182:80/TCP
I1006 23:58:31.243603       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-n7qcb" at 100.68.177.187:80/TCP
I1006 23:58:31.243706       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-t5tpd" at 100.66.214.61:80/TCP
I1006 23:58:31.243816       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-vsss7" at 100.70.106.99:80/TCP
I1006 23:58:31.243923       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-tp5sl" at 100.68.190.12:80/TCP
I1006 23:58:31.244885       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:31.294010       1 service.go:301] Service svc-latency-2393/latency-svc-t7dgc updated: 1 ports
I1006 23:58:31.339593       1 service.go:301] Service svc-latency-2393/latency-svc-ck88l updated: 1 ports
I1006 23:58:31.375818       1 proxier.go:812] "SyncProxyRules complete" elapsed="134.382773ms"
I1006 23:58:32.376952       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-t7dgc" at 100.64.148.141:80/TCP
I1006 23:58:32.377019       1 service.go:416] Adding new service port "svc-latency-2393/latency-svc-ck88l" at 100.64.209.223:80/TCP
I1006 23:58:32.378567       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:32.508092       1 proxier.go:812] "SyncProxyRules complete" elapsed="131.188939ms"
I1006 23:58:33.938148       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:34.053940       1 proxier.go:812] "SyncProxyRules complete" elapsed="116.399813ms"
I1006 23:58:34.532105       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:34.648885       1 proxier.go:812] "SyncProxyRules complete" elapsed="117.399812ms"
I1006 23:58:34.715325       1 service.go:301] Service services-6745/service-proxy-toggled updated: 0 ports
I1006 23:58:35.649949       1 service.go:441] Removing service port "services-6745/service-proxy-toggled"
I1006 23:58:35.650663       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:35.743988       1 proxier.go:812] "SyncProxyRules complete" elapsed="94.038073ms"
I1006 23:58:36.744891       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:36.869778       1 proxier.go:812] "SyncProxyRules complete" elapsed="125.53793ms"
I1006 23:58:37.265558       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:37.372128       1 proxier.go:812] "SyncProxyRules complete" elapsed="107.232426ms"
I1006 23:58:38.205737       1 service.go:301] Service services-2188/clusterip-service updated: 0 ports
I1006 23:58:38.265214       1 service.go:441] Removing service port "services-2188/clusterip-service"
I1006 23:58:38.266683       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:38.371280       1 proxier.go:812] "SyncProxyRules complete" elapsed="106.059588ms"
I1006 23:58:39.004150       1 service.go:301] Service svc-latency-2393/latency-svc-25pc5 updated: 0 ports
I1006 23:58:39.018707       1 service.go:301] Service svc-latency-2393/latency-svc-2f6b8 updated: 0 ports
I1006 23:58:39.037576       1 service.go:301] Service svc-latency-2393/latency-svc-2ht6w updated: 0 ports
I1006 23:58:39.048570       1 service.go:301] Service svc-latency-2393/latency-svc-2j9h2 updated: 0 ports
I1006 23:58:39.065564       1 service.go:301] Service svc-latency-2393/latency-svc-2mfr7 updated: 0 ports
I1006 23:58:39.077409       1 service.go:301] Service svc-latency-2393/latency-svc-2s2mf updated: 0 ports
I1006 23:58:39.094199       1 service.go:301] Service svc-latency-2393/latency-svc-2v4f6 updated: 0 ports
I1006 23:58:39.107676       1 service.go:301] Service svc-latency-2393/latency-svc-2vmx5 updated: 0 ports
I1006 23:58:39.121783       1 service.go:301] Service svc-latency-2393/latency-svc-2zbkg updated: 0 ports
I1006 23:58:39.137567       1 service.go:301] Service svc-latency-2393/latency-svc-2zvvp updated: 0 ports
I1006 23:58:39.151872       1 service.go:301] Service svc-latency-2393/latency-svc-45s85 updated: 0 ports
I1006 23:58:39.166430       1 service.go:301] Service svc-latency-2393/latency-svc-4d8qt updated: 0 ports
I1006 23:58:39.180239       1 service.go:301] Service svc-latency-2393/latency-svc-4khm5 updated: 0 ports
I1006 23:58:39.203894       1 service.go:301] Service svc-latency-2393/latency-svc-4mcw7 updated: 0 ports
I1006 23:58:39.264636       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2f6b8"
I1006 23:58:39.264687       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2ht6w"
I1006 23:58:39.264732       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2mfr7"
I1006 23:58:39.264750       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2v4f6"
I1006 23:58:39.264765       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2zvvp"
I1006 23:58:39.264797       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4khm5"
I1006 23:58:39.264819       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4mcw7"
I1006 23:58:39.264832       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2j9h2"
I1006 23:58:39.264850       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2s2mf"
I1006 23:58:39.264862       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2vmx5"
I1006 23:58:39.264874       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-2zbkg"
I1006 23:58:39.264889       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-25pc5"
I1006 23:58:39.264907       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-45s85"
I1006 23:58:39.264928       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4d8qt"
I1006 23:58:39.265806       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:39.354399       1 proxier.go:812] "SyncProxyRules complete" elapsed="89.755465ms"
I1006 23:58:39.421888       1 service.go:301] Service svc-latency-2393/latency-svc-4qvp2 updated: 0 ports
I1006 23:58:39.503268       1 service.go:301] Service svc-latency-2393/latency-svc-4rntd updated: 0 ports
I1006 23:58:39.582147       1 service.go:301] Service svc-latency-2393/latency-svc-4sq28 updated: 0 ports
I1006 23:58:39.612236       1 service.go:301] Service svc-latency-2393/latency-svc-4wc8j updated: 0 ports
I1006 23:58:39.637944       1 service.go:301] Service svc-latency-2393/latency-svc-57dmk updated: 0 ports
I1006 23:58:39.657674       1 service.go:301] Service svc-latency-2393/latency-svc-5jh9j updated: 0 ports
I1006 23:58:39.682990       1 service.go:301] Service svc-latency-2393/latency-svc-5xvq5 updated: 0 ports
I1006 23:58:39.703813       1 service.go:301] Service svc-latency-2393/latency-svc-5z6dn updated: 0 ports
I1006 23:58:39.723226       1 service.go:301] Service svc-latency-2393/latency-svc-644q4 updated: 0 ports
I1006 23:58:39.741276       1 service.go:301] Service svc-latency-2393/latency-svc-65x5t updated: 0 ports
I1006 23:58:39.770166       1 service.go:301] Service svc-latency-2393/latency-svc-6f9l6 updated: 0 ports
I1006 23:58:39.795719       1 service.go:301] Service svc-latency-2393/latency-svc-6fzt9 updated: 0 ports
I1006 23:58:39.807766       1 service.go:301] Service svc-latency-2393/latency-svc-6n5j5 updated: 0 ports
I1006 23:58:39.826108       1 service.go:301] Service svc-latency-2393/latency-svc-6s79h updated: 0 ports
I1006 23:58:39.842528       1 service.go:301] Service svc-latency-2393/latency-svc-6tq7k updated: 0 ports
I1006 23:58:39.878783       1 service.go:301] Service svc-latency-2393/latency-svc-6vb2d updated: 0 ports
I1006 23:58:39.900542       1 service.go:301] Service svc-latency-2393/latency-svc-6wvqq updated: 0 ports
I1006 23:58:39.916791       1 service.go:301] Service svc-latency-2393/latency-svc-72blr updated: 0 ports
I1006 23:58:39.925314       1 service.go:301] Service svc-latency-2393/latency-svc-78249 updated: 0 ports
I1006 23:58:39.934995       1 service.go:301] Service svc-latency-2393/latency-svc-7f76x updated: 0 ports
I1006 23:58:39.950552       1 service.go:301] Service svc-latency-2393/latency-svc-7qttt updated: 0 ports
I1006 23:58:39.968352       1 service.go:301] Service svc-latency-2393/latency-svc-849zs updated: 0 ports
I1006 23:58:39.983342       1 service.go:301] Service svc-latency-2393/latency-svc-84jwb updated: 0 ports
I1006 23:58:39.993486       1 service.go:301] Service svc-latency-2393/latency-svc-86bkf updated: 0 ports
I1006 23:58:40.004652       1 service.go:301] Service svc-latency-2393/latency-svc-88vj8 updated: 0 ports
I1006 23:58:40.021016       1 service.go:301] Service svc-latency-2393/latency-svc-895d9 updated: 0 ports
I1006 23:58:40.050506       1 service.go:301] Service svc-latency-2393/latency-svc-8bv9v updated: 0 ports
I1006 23:58:40.070608       1 service.go:301] Service svc-latency-2393/latency-svc-8cjfh updated: 0 ports
I1006 23:58:40.086712       1 service.go:301] Service svc-latency-2393/latency-svc-8lxw5 updated: 0 ports
I1006 23:58:40.097713       1 service.go:301] Service svc-latency-2393/latency-svc-8p67f updated: 0 ports
I1006 23:58:40.110804       1 service.go:301] Service svc-latency-2393/latency-svc-8rhqc updated: 0 ports
I1006 23:58:40.124279       1 service.go:301] Service svc-latency-2393/latency-svc-8t9bn updated: 0 ports
I1006 23:58:40.139835       1 service.go:301] Service svc-latency-2393/latency-svc-8tzdd updated: 0 ports
I1006 23:58:40.148796       1 service.go:301] Service svc-latency-2393/latency-svc-95dd8 updated: 0 ports
I1006 23:58:40.177167       1 service.go:301] Service svc-latency-2393/latency-svc-9jfgq updated: 0 ports
I1006 23:58:40.197893       1 service.go:301] Service svc-latency-2393/latency-svc-9mctz updated: 0 ports
I1006 23:58:40.215012       1 service.go:301] Service svc-latency-2393/latency-svc-9vlxv updated: 0 ports
I1006 23:58:40.228503       1 service.go:301] Service svc-latency-2393/latency-svc-9wr74 updated: 0 ports
I1006 23:58:40.265955       1 service.go:301] Service svc-latency-2393/latency-svc-9z82n updated: 0 ports
I1006 23:58:40.266003       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-86bkf"
I1006 23:58:40.266022       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8lxw5"
I1006 23:58:40.266037       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4sq28"
I1006 23:58:40.266049       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-5xvq5"
I1006 23:58:40.266063       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6fzt9"
I1006 23:58:40.266074       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-644q4"
I1006 23:58:40.266084       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6tq7k"
I1006 23:58:40.266093       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-9mctz"
I1006 23:58:40.266103       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4wc8j"
I1006 23:58:40.266112       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-7qttt"
I1006 23:58:40.266121       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-9wr74"
I1006 23:58:40.266130       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6f9l6"
I1006 23:58:40.266137       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-78249"
I1006 23:58:40.266147       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-7f76x"
I1006 23:58:40.266155       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6wvqq"
I1006 23:58:40.266163       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8t9bn"
I1006 23:58:40.266176       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4qvp2"
I1006 23:58:40.266185       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-5jh9j"
I1006 23:58:40.266196       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-5z6dn"
I1006 23:58:40.266207       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8cjfh"
I1006 23:58:40.266216       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8p67f"
I1006 23:58:40.266224       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-65x5t"
I1006 23:58:40.266233       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-849zs"
I1006 23:58:40.266241       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8bv9v"
I1006 23:58:40.266250       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-88vj8"
I1006 23:58:40.266261       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-895d9"
I1006 23:58:40.266269       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8rhqc"
I1006 23:58:40.266277       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-8tzdd"
I1006 23:58:40.266285       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-9vlxv"
I1006 23:58:40.266293       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-4rntd"
I1006 23:58:40.266301       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-57dmk"
I1006 23:58:40.266310       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6n5j5"
I1006 23:58:40.266319       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-9z82n"
I1006 23:58:40.266328       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-84jwb"
I1006 23:58:40.266336       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-95dd8"
I1006 23:58:40.266345       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-9jfgq"
I1006 23:58:40.266354       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6s79h"
I1006 23:58:40.266362       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-6vb2d"
I1006 23:58:40.266370       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-72blr"
I1006 23:58:40.267127       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:40.303949       1 service.go:301] Service svc-latency-2393/latency-svc-9znmp updated: 0 ports
I1006 23:58:40.328604       1 service.go:301] Service svc-latency-2393/latency-svc-b47ct updated: 0 ports
I1006 23:58:40.340004       1 service.go:301] Service svc-latency-2393/latency-svc-b8cp5 updated: 0 ports
I1006 23:58:40.372878       1 service.go:301] Service svc-latency-2393/latency-svc-bfk2f updated: 0 ports
I1006 23:58:40.397768       1 proxier.go:812] "SyncProxyRules complete" elapsed="131.745862ms"
I1006 23:58:40.404353       1 service.go:301] Service svc-latency-2393/latency-svc-bgk5c updated: 0 ports
I1006 23:58:40.439255       1 service.go:301] Service svc-latency-2393/latency-svc-bhkxf updated: 0 ports
I1006 23:58:40.470579       1 service.go:301] Service svc-latency-2393/latency-svc-bjp7n updated: 0 ports
I1006 23:58:40.517043       1 service.go:301] Service svc-latency-2393/latency-svc-bqrrs updated: 0 ports
I1006 23:58:40.562835       1 service.go:301] Service svc-latency-2393/latency-svc-brlrb updated: 0 ports
I1006 23:58:40.596289       1 service.go:301] Service svc-latency-2393/latency-svc-c5kdc updated: 0 ports
I1006 23:58:40.612669       1 service.go:301] Service svc-latency-2393/latency-svc-c8968 updated: 0 ports
I1006 23:58:40.632662       1 service.go:301] Service svc-latency-2393/latency-svc-c9bqk updated: 0 ports
I1006 23:58:40.640942       1 service.go:301] Service svc-latency-2393/latency-svc-ccshc updated: 0 ports
I1006 23:58:40.652511       1 service.go:301] Service svc-latency-2393/latency-svc-ck88l updated: 0 ports
I1006 23:58:40.676703       1 service.go:301] Service svc-latency-2393/latency-svc-cl57v updated: 0 ports
I1006 23:58:40.687420       1 service.go:301] Service svc-latency-2393/latency-svc-cwcq2 updated: 0 ports
I1006 23:58:40.717384       1 service.go:301] Service svc-latency-2393/latency-svc-cwhx4 updated: 0 ports
I1006 23:58:40.741895       1 service.go:301] Service svc-latency-2393/latency-svc-d4nf7 updated: 0 ports
I1006 23:58:40.754549       1 service.go:301] Service svc-latency-2393/latency-svc-d7x2s updated: 0 ports
I1006 23:58:40.780799       1 service.go:301] Service svc-latency-2393/latency-svc-dd9dt updated: 0 ports
I1006 23:58:40.803616       1 service.go:301] Service svc-latency-2393/latency-svc-dhlgm updated: 0 ports
I1006 23:58:40.819394       1 service.go:301] Service svc-latency-2393/latency-svc-dsrz7 updated: 0 ports
I1006 23:58:40.827524       1 service.go:301] Service svc-latency-2393/latency-svc-dtwqp updated: 0 ports
I1006 23:58:40.842039       1 service.go:301] Service svc-latency-2393/latency-svc-f5vlj updated: 0 ports
I1006 23:58:40.869021       1 service.go:301] Service svc-latency-2393/latency-svc-f8mr2 updated: 0 ports
I1006 23:58:40.881932       1 service.go:301] Service svc-latency-2393/latency-svc-f9f5k updated: 0 ports
I1006 23:58:40.899425       1 service.go:301] Service svc-latency-2393/latency-svc-fmp5f updated: 0 ports
I1006 23:58:40.928310       1 service.go:301] Service svc-latency-2393/latency-svc-fq2rs updated: 0 ports
I1006 23:58:40.942434       1 service.go:301] Service svc-latency-2393/latency-svc-ftpls updated: 0 ports
I1006 23:58:40.960212       1 service.go:301] Service svc-latency-2393/latency-svc-fvxws updated: 0 ports
I1006 23:58:41.004844       1 service.go:301] Service svc-latency-2393/latency-svc-fztg4 updated: 0 ports
I1006 23:58:41.035689       1 service.go:301] Service svc-latency-2393/latency-svc-g9zgv updated: 0 ports
I1006 23:58:41.049879       1 service.go:301] Service svc-latency-2393/latency-svc-ggr9f updated: 0 ports
I1006 23:58:41.067951       1 service.go:301] Service svc-latency-2393/latency-svc-ghnmt updated: 0 ports
I1006 23:58:41.078572       1 service.go:301] Service svc-latency-2393/latency-svc-gk844 updated: 0 ports
I1006 23:58:41.097019       1 service.go:301] Service svc-latency-2393/latency-svc-gnp8t updated: 0 ports
I1006 23:58:41.122330       1 service.go:301] Service svc-latency-2393/latency-svc-gv4lt updated: 0 ports
I1006 23:58:41.138008       1 service.go:301] Service svc-latency-2393/latency-svc-h467x updated: 0 ports
I1006 23:58:41.151369       1 service.go:301] Service svc-latency-2393/latency-svc-hbrxx updated: 0 ports
I1006 23:58:41.167292       1 service.go:301] Service svc-latency-2393/latency-svc-hfbbl updated: 0 ports
I1006 23:58:41.176641       1 service.go:301] Service svc-latency-2393/latency-svc-hr77g updated: 0 ports
I1006 23:58:41.209794       1 service.go:301] Service svc-latency-2393/latency-svc-j7zmj updated: 0 ports
I1006 23:58:41.240645       1 service.go:301] Service svc-latency-2393/latency-svc-jbm7q updated: 0 ports
I1006 23:58:41.240705       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-fvxws"
I1006 23:58:41.240848       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ggr9f"
I1006 23:58:41.240858       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-jbm7q"
I1006 23:58:41.240867       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-f8mr2"
I1006 23:58:41.240875       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-b47ct"
I1006 23:58:41.240984       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-cl57v"
I1006 23:58:41.241073       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-d7x2s"
I1006 23:58:41.241091       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ck88l"
I1006 23:58:41.241100       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-dtwqp"
I1006 23:58:41.241178       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-f9f5k"
I1006 23:58:41.241197       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-fmp5f"
I1006 23:58:41.241254       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-fq2rs"
I1006 23:58:41.241271       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-bjp7n"
I1006 23:58:41.241329       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-bqrrs"
I1006 23:58:41.241410       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-c9bqk"
I1006 23:58:41.241597       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-fztg4"
I1006 23:58:41.241616       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-hfbbl"
I1006 23:58:41.241625       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-cwhx4"
I1006 23:58:41.241633       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-dd9dt"
I1006 23:58:41.241641       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-b8cp5"
I1006 23:58:41.241732       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-brlrb"
I1006 23:58:41.241751       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ccshc"
I1006 23:58:41.241760       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ghnmt"
I1006 23:58:41.241768       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-gk844"
I1006 23:58:41.241776       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-gnp8t"
I1006 23:58:41.241784       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-hbrxx"
I1006 23:58:41.241805       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-bfk2f"
I1006 23:58:41.241815       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-bhkxf"
I1006 23:58:41.241824       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-cwcq2"
I1006 23:58:41.241832       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-dsrz7"
I1006 23:58:41.241840       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ftpls"
I1006 23:58:41.241937       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-g9zgv"
I1006 23:58:41.241963       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-d4nf7"
I1006 23:58:41.242041       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-f5vlj"
I1006 23:58:41.242122       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-gv4lt"
I1006 23:58:41.242141       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-h467x"
I1006 23:58:41.242151       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-hr77g"
I1006 23:58:41.242160       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-9znmp"
I1006 23:58:41.242169       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-bgk5c"
I1006 23:58:41.242258       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-c5kdc"
I1006 23:58:41.242267       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-c8968"
I1006 23:58:41.242275       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-dhlgm"
I1006 23:58:41.242283       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-j7zmj"
I1006 23:58:41.243199       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:41.265425       1 service.go:301] Service svc-latency-2393/latency-svc-jmqfb updated: 0 ports
I1006 23:58:41.293038       1 service.go:301] Service svc-latency-2393/latency-svc-jn2lf updated: 0 ports
I1006 23:58:41.313267       1 proxier.go:812] "SyncProxyRules complete" elapsed="72.547354ms"
I1006 23:58:41.318994       1 service.go:301] Service svc-latency-2393/latency-svc-jw9bd updated: 0 ports
I1006 23:58:41.344189       1 service.go:301] Service svc-latency-2393/latency-svc-k4jsf updated: 0 ports
I1006 23:58:41.396054       1 service.go:301] Service svc-latency-2393/latency-svc-kgc4n updated: 0 ports
I1006 23:58:41.494222       1 service.go:301] Service svc-latency-2393/latency-svc-kmgds updated: 0 ports
I1006 23:58:41.550197       1 service.go:301] Service svc-latency-2393/latency-svc-kqcz4 updated: 0 ports
I1006 23:58:41.604927       1 service.go:301] Service svc-latency-2393/latency-svc-kt9c2 updated: 0 ports
I1006 23:58:41.628619       1 service.go:301] Service svc-latency-2393/latency-svc-kwdlc updated: 0 ports
I1006 23:58:41.639635       1 service.go:301] Service svc-latency-2393/latency-svc-l2rmx updated: 0 ports
I1006 23:58:41.653599       1 service.go:301] Service svc-latency-2393/latency-svc-ldldr updated: 0 ports
I1006 23:58:41.680331       1 service.go:301] Service svc-latency-2393/latency-svc-ldq82 updated: 0 ports
I1006 23:58:41.704371       1 service.go:301] Service svc-latency-2393/latency-svc-ljwk7 updated: 0 ports
I1006 23:58:41.738872       1 service.go:301] Service svc-latency-2393/latency-svc-lkcg4 updated: 0 ports
I1006 23:58:41.762369       1 service.go:301] Service svc-latency-2393/latency-svc-lmhqf updated: 0 ports
I1006 23:58:41.786832       1 service.go:301] Service svc-latency-2393/latency-svc-lz54r updated: 0 ports
I1006 23:58:41.801663       1 service.go:301] Service svc-latency-2393/latency-svc-m5g7n updated: 0 ports
I1006 23:58:41.819964       1 service.go:301] Service svc-latency-2393/latency-svc-mjtlp updated: 0 ports
I1006 23:58:41.830958       1 service.go:301] Service svc-latency-2393/latency-svc-ml7jn updated: 0 ports
I1006 23:58:41.854901       1 service.go:301] Service svc-latency-2393/latency-svc-mpn8x updated: 0 ports
I1006 23:58:41.875678       1 service.go:301] Service svc-latency-2393/latency-svc-mqgrv updated: 0 ports
I1006 23:58:41.895577       1 service.go:301] Service svc-latency-2393/latency-svc-n4pq2 updated: 0 ports
I1006 23:58:41.913139       1 service.go:301] Service svc-latency-2393/latency-svc-n7qcb updated: 0 ports
I1006 23:58:41.932238       1 service.go:301] Service svc-latency-2393/latency-svc-n7xzt updated: 0 ports
I1006 23:58:41.969997       1 service.go:301] Service svc-latency-2393/latency-svc-nb9qf updated: 0 ports
I1006 23:58:41.993407       1 service.go:301] Service svc-latency-2393/latency-svc-ncgrr updated: 0 ports
I1006 23:58:42.004994       1 service.go:301] Service svc-latency-2393/latency-svc-nkxnz updated: 0 ports
I1006 23:58:42.024462       1 service.go:301] Service svc-latency-2393/latency-svc-nr2rl updated: 0 ports
I1006 23:58:42.070832       1 service.go:301] Service svc-latency-2393/latency-svc-ntzqt updated: 0 ports
I1006 23:58:42.137426       1 service.go:301] Service svc-latency-2393/latency-svc-nzkkj updated: 0 ports
I1006 23:58:42.172519       1 service.go:301] Service svc-latency-2393/latency-svc-p5gsf updated: 0 ports
I1006 23:58:42.230773       1 service.go:301] Service svc-latency-2393/latency-svc-p6hmv updated: 0 ports
I1006 23:58:42.308209       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-mjtlp"
I1006 23:58:42.308248       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-jn2lf"
I1006 23:58:42.308258       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-jw9bd"
I1006 23:58:42.308267       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-kmgds"
I1006 23:58:42.308275       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-lz54r"
I1006 23:58:42.308284       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-kt9c2"
I1006 23:58:42.308293       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-n7qcb"
I1006 23:58:42.308302       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-nr2rl"
I1006 23:58:42.308311       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-nzkkj"
I1006 23:58:42.308319       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-k4jsf"
I1006 23:58:42.308327       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-mqgrv"
I1006 23:58:42.308338       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-n4pq2"
I1006 23:58:42.308349       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-nkxnz"
I1006 23:58:42.308362       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-p6hmv"
I1006 23:58:42.308371       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-kgc4n"
I1006 23:58:42.308380       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ml7jn"
I1006 23:58:42.308412       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-n7xzt"
I1006 23:58:42.308429       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ntzqt"
I1006 23:58:42.308445       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-kwdlc"
I1006 23:58:42.308455       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-mpn8x"
I1006 23:58:42.308464       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ncgrr"
I1006 23:58:42.308474       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-lmhqf"
I1006 23:58:42.308483       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-m5g7n"
I1006 23:58:42.308493       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-nb9qf"
I1006 23:58:42.308505       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-kqcz4"
I1006 23:58:42.308517       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ldldr"
I1006 23:58:42.308533       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ldq82"
I1006 23:58:42.308559       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-ljwk7"
I1006 23:58:42.308573       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-jmqfb"
I1006 23:58:42.308581       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-l2rmx"
I1006 23:58:42.308590       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-lkcg4"
I1006 23:58:42.308599       1 service.go:441] Removing service port "svc-latency-2393/latency-svc-p5gsf"
I1006 23:58:42.309345       1 proxier.go:845] "Syncing iptables rules"
I1006 23:58:42.323236       1 service.go:301] Service svc-latency-2393/latency-svc-ph7hq updated: 0 ports
I1006 23:58:42.373205       1 proxier.go:812] "SyncProxyRules complete" elapsed="64.992963ms"
I1006 23:58:42.492451       1 service.go:301] Service svc-latency-2393/latency-svc-pjp4l updated: 0 ports
I1006 23:58:42.549936       1 service.go:301] Service svc-latency-2393/latency-svc-pkdj4 updated: 0 ports
I1006 23:58:42.645583       1 service.go:301] Service svc-latency-2393/latency-svc-ptrkr updated: 0 ports
I1006 23:58:42.667877       1 service.go:301] Service svc-latency-2393/latency-svc-pws45 updated: 0 ports
I1006 23:58:42.691169       1 service.go:301] Service svc-latency-2393/latency-svc-q5pc8 updated: 0 ports
I1006 23:58:42.709558       1 service.go:301] 
Service svc-latency-2393/latency-svc-q6rjz updated: 0 ports\nI1006 23:58:42.746165       1 service.go:301] Service svc-latency-2393/latency-svc-q7xnp updated: 0 ports\nI1006 23:58:42.780914       1 service.go:301] Service svc-latency-2393/latency-svc-q84bn updated: 0 ports\nI1006 23:58:42.800387       1 service.go:301] Service svc-latency-2393/latency-svc-q8npc updated: 0 ports\nI1006 23:58:42.821374       1 service.go:301] Service svc-latency-2393/latency-svc-qrk2d updated: 0 ports\nI1006 23:58:42.830804       1 service.go:301] Service svc-latency-2393/latency-svc-qs9qp updated: 0 ports\nI1006 23:58:42.849662       1 service.go:301] Service svc-latency-2393/latency-svc-qsxlg updated: 0 ports\nI1006 23:58:42.873532       1 service.go:301] Service svc-latency-2393/latency-svc-r57pf updated: 0 ports\nI1006 23:58:42.896214       1 service.go:301] Service svc-latency-2393/latency-svc-r5l7x updated: 0 ports\nI1006 23:58:42.926790       1 service.go:301] Service svc-latency-2393/latency-svc-r5wxn updated: 0 ports\nI1006 23:58:42.954121       1 service.go:301] Service svc-latency-2393/latency-svc-r76jp updated: 0 ports\nI1006 23:58:43.012252       1 service.go:301] Service svc-latency-2393/latency-svc-rbgp8 updated: 0 ports\nI1006 23:58:43.045872       1 service.go:301] Service svc-latency-2393/latency-svc-rbqdn updated: 0 ports\nI1006 23:58:43.078378       1 service.go:301] Service svc-latency-2393/latency-svc-rktpx updated: 0 ports\nI1006 23:58:43.092918       1 service.go:301] Service svc-latency-2393/latency-svc-rmhph updated: 0 ports\nI1006 23:58:43.108835       1 service.go:301] Service svc-latency-2393/latency-svc-rq2f4 updated: 0 ports\nI1006 23:58:43.152144       1 service.go:301] Service svc-latency-2393/latency-svc-rtrd7 updated: 0 ports\nI1006 23:58:43.196343       1 service.go:301] Service svc-latency-2393/latency-svc-rvbxj updated: 0 ports\nI1006 23:58:43.212977       1 service.go:301] Service svc-latency-2393/latency-svc-rvg8x updated: 0 ports\nI1006 
23:58:43.238014       1 service.go:301] Service svc-latency-2393/latency-svc-s5x4w updated: 0 ports\nI1006 23:58:43.238056       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r57pf\"\nI1006 23:58:43.238071       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r76jp\"\nI1006 23:58:43.238081       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ph7hq\"\nI1006 23:58:43.238090       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-pjp4l\"\nI1006 23:58:43.238099       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ptrkr\"\nI1006 23:58:43.238107       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-pws45\"\nI1006 23:58:43.238115       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-qrk2d\"\nI1006 23:58:43.238123       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rvg8x\"\nI1006 23:58:43.238132       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q5pc8\"\nI1006 23:58:43.238150       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q8npc\"\nI1006 23:58:43.238158       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-qs9qp\"\nI1006 23:58:43.238166       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-qsxlg\"\nI1006 23:58:43.238175       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rtrd7\"\nI1006 23:58:43.238184       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-s5x4w\"\nI1006 23:58:43.238192       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-pkdj4\"\nI1006 23:58:43.238200       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q7xnp\"\nI1006 23:58:43.238209       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rbgp8\"\nI1006 23:58:43.238217       1 service.go:441] Removing service 
port \"svc-latency-2393/latency-svc-rq2f4\"\nI1006 23:58:43.238225       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rvbxj\"\nI1006 23:58:43.238232       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rktpx\"\nI1006 23:58:43.238240       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rmhph\"\nI1006 23:58:43.238251       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q6rjz\"\nI1006 23:58:43.238259       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-q84bn\"\nI1006 23:58:43.238268       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r5l7x\"\nI1006 23:58:43.238275       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-r5wxn\"\nI1006 23:58:43.238283       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-rbqdn\"\nI1006 23:58:43.238702       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:43.273871       1 service.go:301] Service svc-latency-2393/latency-svc-s7pjz updated: 0 ports\nI1006 23:58:43.304527       1 service.go:301] Service svc-latency-2393/latency-svc-sg2w2 updated: 0 ports\nI1006 23:58:43.340842       1 service.go:301] Service svc-latency-2393/latency-svc-sg7mj updated: 0 ports\nI1006 23:58:43.355563       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"117.492968ms\"\nI1006 23:58:43.398596       1 service.go:301] Service svc-latency-2393/latency-svc-sm4rn updated: 0 ports\nI1006 23:58:43.429174       1 service.go:301] Service svc-latency-2393/latency-svc-snxws updated: 0 ports\nI1006 23:58:43.442984       1 service.go:301] Service svc-latency-2393/latency-svc-sqjb2 updated: 0 ports\nI1006 23:58:43.461069       1 service.go:301] Service svc-latency-2393/latency-svc-svvp8 updated: 0 ports\nI1006 23:58:43.477283       1 service.go:301] Service svc-latency-2393/latency-svc-sx9kt updated: 0 ports\nI1006 23:58:43.506548       1 service.go:301] Service 
svc-latency-2393/latency-svc-sxl4j updated: 0 ports\nI1006 23:58:43.534108       1 service.go:301] Service svc-latency-2393/latency-svc-t5tpd updated: 0 ports\nI1006 23:58:43.552554       1 service.go:301] Service svc-latency-2393/latency-svc-t7d5t updated: 0 ports\nI1006 23:58:43.577641       1 service.go:301] Service svc-latency-2393/latency-svc-t7dgc updated: 0 ports\nI1006 23:58:43.595416       1 service.go:301] Service svc-latency-2393/latency-svc-t879p updated: 0 ports\nI1006 23:58:43.612184       1 service.go:301] Service svc-latency-2393/latency-svc-thr85 updated: 0 ports\nI1006 23:58:43.637439       1 service.go:301] Service svc-latency-2393/latency-svc-tp5sl updated: 0 ports\nI1006 23:58:43.649319       1 service.go:301] Service svc-latency-2393/latency-svc-trffr updated: 0 ports\nI1006 23:58:43.675044       1 service.go:301] Service svc-latency-2393/latency-svc-twbnv updated: 0 ports\nI1006 23:58:43.692627       1 service.go:301] Service svc-latency-2393/latency-svc-tx546 updated: 0 ports\nI1006 23:58:43.705948       1 service.go:301] Service svc-latency-2393/latency-svc-tzfpz updated: 0 ports\nI1006 23:58:43.736200       1 service.go:301] Service svc-latency-2393/latency-svc-v8qq2 updated: 0 ports\nI1006 23:58:43.756624       1 service.go:301] Service svc-latency-2393/latency-svc-vgb5r updated: 0 ports\nI1006 23:58:43.779170       1 service.go:301] Service svc-latency-2393/latency-svc-vmbwr updated: 0 ports\nI1006 23:58:43.800480       1 service.go:301] Service svc-latency-2393/latency-svc-vrx8c updated: 0 ports\nI1006 23:58:43.812170       1 service.go:301] Service svc-latency-2393/latency-svc-vsss7 updated: 0 ports\nI1006 23:58:43.836932       1 service.go:301] Service svc-latency-2393/latency-svc-vxmjf updated: 0 ports\nI1006 23:58:43.854263       1 service.go:301] Service svc-latency-2393/latency-svc-wm7kf updated: 0 ports\nI1006 23:58:43.884348       1 service.go:301] Service svc-latency-2393/latency-svc-wrpzl updated: 0 ports\nI1006 
23:58:43.895155       1 service.go:301] Service svc-latency-2393/latency-svc-ww5ng updated: 0 ports\nI1006 23:58:43.903858       1 service.go:301] Service svc-latency-2393/latency-svc-wwvrh updated: 0 ports\nI1006 23:58:43.917943       1 service.go:301] Service svc-latency-2393/latency-svc-x7857 updated: 0 ports\nI1006 23:58:43.926620       1 service.go:301] Service svc-latency-2393/latency-svc-x96jb updated: 0 ports\nI1006 23:58:43.948634       1 service.go:301] Service svc-latency-2393/latency-svc-xdldk updated: 0 ports\nI1006 23:58:43.958350       1 service.go:301] Service svc-latency-2393/latency-svc-xg4ff updated: 0 ports\nI1006 23:58:43.971028       1 service.go:301] Service svc-latency-2393/latency-svc-xh9p9 updated: 0 ports\nI1006 23:58:43.989678       1 service.go:301] Service svc-latency-2393/latency-svc-xjw58 updated: 0 ports\nI1006 23:58:43.998946       1 service.go:301] Service svc-latency-2393/latency-svc-xpdff updated: 0 ports\nI1006 23:58:44.010246       1 service.go:301] Service svc-latency-2393/latency-svc-xqd76 updated: 0 ports\nI1006 23:58:44.020890       1 service.go:301] Service svc-latency-2393/latency-svc-xs5sp updated: 0 ports\nI1006 23:58:44.031311       1 service.go:301] Service svc-latency-2393/latency-svc-xspnn updated: 0 ports\nI1006 23:58:44.041418       1 service.go:301] Service svc-latency-2393/latency-svc-z29ch updated: 0 ports\nI1006 23:58:44.061499       1 service.go:301] Service svc-latency-2393/latency-svc-z4dpb updated: 0 ports\nI1006 23:58:44.072749       1 service.go:301] Service svc-latency-2393/latency-svc-z6dpv updated: 0 ports\nI1006 23:58:44.078434       1 service.go:301] Service svc-latency-2393/latency-svc-zgfcn updated: 0 ports\nI1006 23:58:44.087818       1 service.go:301] Service svc-latency-2393/latency-svc-zgz8f updated: 0 ports\nI1006 23:58:44.106422       1 service.go:301] Service svc-latency-2393/latency-svc-zjs27 updated: 0 ports\nI1006 23:58:44.121143       1 service.go:301] Service 
svc-latency-2393/latency-svc-zkx4z updated: 0 ports\nI1006 23:58:44.128768       1 service.go:301] Service svc-latency-2393/latency-svc-zr7lv updated: 0 ports\nI1006 23:58:44.232742       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sg7mj\"\nI1006 23:58:44.232781       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sqjb2\"\nI1006 23:58:44.232791       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sx9kt\"\nI1006 23:58:44.232819       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-twbnv\"\nI1006 23:58:44.232828       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-wwvrh\"\nI1006 23:58:44.232837       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zjs27\"\nI1006 23:58:44.232846       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sm4rn\"\nI1006 23:58:44.232855       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-thr85\"\nI1006 23:58:44.232866       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-tzfpz\"\nI1006 23:58:44.232875       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xdldk\"\nI1006 23:58:44.232902       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xpdff\"\nI1006 23:58:44.232911       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xqd76\"\nI1006 23:58:44.232929       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-z29ch\"\nI1006 23:58:44.232949       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zgz8f\"\nI1006 23:58:44.232959       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t5tpd\"\nI1006 23:58:44.232987       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-v8qq2\"\nI1006 23:58:44.232998       1 service.go:441] Removing service port 
\"svc-latency-2393/latency-svc-vmbwr\"\nI1006 23:58:44.233007       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vsss7\"\nI1006 23:58:44.233015       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-x96jb\"\nI1006 23:58:44.233023       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xg4ff\"\nI1006 23:58:44.233032       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xjw58\"\nI1006 23:58:44.233040       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-svvp8\"\nI1006 23:58:44.233080       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t7dgc\"\nI1006 23:58:44.233090       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-trffr\"\nI1006 23:58:44.233099       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xspnn\"\nI1006 23:58:44.233108       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sxl4j\"\nI1006 23:58:44.233116       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xh9p9\"\nI1006 23:58:44.233127       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-z4dpb\"\nI1006 23:58:44.233135       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zr7lv\"\nI1006 23:58:44.233166       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-x7857\"\nI1006 23:58:44.233177       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-s7pjz\"\nI1006 23:58:44.233186       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-sg2w2\"\nI1006 23:58:44.233194       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-snxws\"\nI1006 23:58:44.233203       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-tp5sl\"\nI1006 23:58:44.233211       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vrx8c\"\nI1006 
23:58:44.233219       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-wm7kf\"\nI1006 23:58:44.233247       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-ww5ng\"\nI1006 23:58:44.233256       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-xs5sp\"\nI1006 23:58:44.233264       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zkx4z\"\nI1006 23:58:44.233272       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vgb5r\"\nI1006 23:58:44.233282       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-z6dpv\"\nI1006 23:58:44.233290       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t7d5t\"\nI1006 23:58:44.233298       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-t879p\"\nI1006 23:58:44.233308       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-tx546\"\nI1006 23:58:44.233335       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-vxmjf\"\nI1006 23:58:44.233343       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-wrpzl\"\nI1006 23:58:44.233351       1 service.go:441] Removing service port \"svc-latency-2393/latency-svc-zgfcn\"\nI1006 23:58:44.233915       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:44.315665       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"82.942165ms\"\nI1006 23:58:45.317238       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:45.361542       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.459223ms\"\nI1006 23:58:51.591515       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:51.639846       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.460629ms\"\nI1006 23:58:51.639992       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:51.683377       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"43.487127ms\"\nI1006 
23:58:53.421746       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:53.477325       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.696402ms\"\nI1006 23:58:54.477745       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:54.532735       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.149717ms\"\nI1006 23:58:55.764056       1 service.go:301] Service dns-8021/test-service-2 updated: 1 ports\nI1006 23:58:55.764117       1 service.go:416] Adding new service port \"dns-8021/test-service-2:http\" at 100.64.41.120:80/TCP\nI1006 23:58:55.764258       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:55.809679       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.558997ms\"\nI1006 23:58:55.809825       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:58:55.854975       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.250183ms\"\nI1006 23:59:00.712211       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:00.782735       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"70.661286ms\"\nI1006 23:59:00.783021       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:00.799368       1 service.go:301] Service services-7315/service-headless-toggled updated: 0 ports\nI1006 23:59:00.858790       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"76.003556ms\"\nI1006 23:59:01.656640       1 service.go:301] Service services-327/nodeport-collision-1 updated: 1 ports\nI1006 23:59:01.738028       1 service.go:441] Removing service port \"services-7315/service-headless-toggled\"\nI1006 23:59:01.738291       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:01.814244       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"76.207589ms\"\nI1006 23:59:01.815940       1 service.go:301] Service services-327/nodeport-collision-2 updated: 1 ports\nI1006 23:59:02.576200       1 service.go:301] Service services-2188/externalsvc updated: 0 ports\nI1006 23:59:02.815258       1 
service.go:441] Removing service port \"services-2188/externalsvc\"\nI1006 23:59:02.815455       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:02.862596       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.337811ms\"\nI1006 23:59:06.370382       1 service.go:301] Service kubectl-3758/agnhost-primary updated: 1 ports\nI1006 23:59:06.370435       1 service.go:416] Adding new service port \"kubectl-3758/agnhost-primary\" at 100.66.191.211:6379/TCP\nI1006 23:59:06.370553       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:06.428970       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"58.523049ms\"\nI1006 23:59:06.429113       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:06.493153       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"64.130034ms\"\nI1006 23:59:08.432614       1 service.go:301] Service apply-2176/test-svc updated: 1 ports\nI1006 23:59:08.432676       1 service.go:416] Adding new service port \"apply-2176/test-svc\" at 100.71.29.57:8080/UDP\nI1006 23:59:08.432833       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:08.515695       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"83.015042ms\"\nI1006 23:59:09.092828       1 service.go:301] Service services-6046/nodeport-range-test updated: 1 ports\nI1006 23:59:09.092875       1 service.go:416] Adding new service port \"services-6046/nodeport-range-test\" at 100.66.215.184:80/TCP\nI1006 23:59:09.092992       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:09.145878       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-6046/nodeport-range-test\\\" (:32015/tcp4)\"\nI1006 23:59:09.152684       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"59.798218ms\"\nI1006 23:59:09.251852       1 service.go:301] Service services-6046/nodeport-range-test updated: 0 ports\nI1006 23:59:10.153014       1 service.go:441] Removing service port \"services-6046/nodeport-range-test\"\nI1006 
23:59:10.153164       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:10.213805       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"60.793093ms\"\nI1006 23:59:11.214173       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:11.265273       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"51.203936ms\"\nI1006 23:59:12.103998       1 service.go:301] Service kubectl-3758/agnhost-primary updated: 0 ports\nI1006 23:59:12.104044       1 service.go:441] Removing service port \"kubectl-3758/agnhost-primary\"\nI1006 23:59:12.104162       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:12.149237       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.182729ms\"\nI1006 23:59:13.149552       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:13.197091       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.648201ms\"\nI1006 23:59:13.709415       1 service.go:301] Service apply-2176/test-svc updated: 0 ports\nI1006 23:59:13.709462       1 service.go:441] Removing service port \"apply-2176/test-svc\"\nI1006 23:59:13.709639       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:13.763613       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"54.138899ms\"\nI1006 23:59:26.630026       1 service.go:301] Service services-542/test-service-hsr9n updated: 1 ports\nI1006 23:59:26.630091       1 service.go:416] Adding new service port \"services-542/test-service-hsr9n:http\" at 100.70.119.72:80/TCP\nI1006 23:59:26.630233       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:26.677099       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.002224ms\"\nI1006 23:59:26.874436       1 service.go:301] Service services-542/test-service-hsr9n updated: 1 ports\nI1006 23:59:26.874499       1 service.go:418] Updating existing service port \"services-542/test-service-hsr9n:http\" at 100.70.119.72:80/TCP\nI1006 23:59:26.874613       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 
23:59:26.923272       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.766406ms\"\nI1006 23:59:27.144077       1 service.go:301] Service services-542/test-service-hsr9n updated: 0 ports\nI1006 23:59:27.923762       1 service.go:441] Removing service port \"services-542/test-service-hsr9n:http\"\nI1006 23:59:27.923924       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:28.084755       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"160.99372ms\"\nI1006 23:59:28.526298       1 service.go:301] Service webhook-3056/e2e-test-webhook updated: 1 ports\nI1006 23:59:29.085621       1 service.go:416] Adding new service port \"webhook-3056/e2e-test-webhook\" at 100.71.229.72:8443/TCP\nI1006 23:59:29.085795       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:29.144873       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"59.28712ms\"\nI1006 23:59:30.100657       1 service.go:301] Service webhook-3056/e2e-test-webhook updated: 0 ports\nI1006 23:59:30.100700       1 service.go:441] Removing service port \"webhook-3056/e2e-test-webhook\"\nI1006 23:59:30.100826       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:30.162738       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"62.021044ms\"\nI1006 23:59:31.163069       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:31.221249       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"58.326743ms\"\nI1006 23:59:33.151759       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:33.183894       1 service.go:301] Service dns-8021/test-service-2 updated: 0 ports\nI1006 23:59:33.200104       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.468412ms\"\nI1006 23:59:33.200144       1 service.go:441] Removing service port \"dns-8021/test-service-2:http\"\nI1006 23:59:33.200632       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:33.246405       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"46.255223ms\"\nI1006 
23:59:34.246918       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:34.289403       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"42.579556ms\"\nI1006 23:59:39.807479       1 service.go:301] Service services-9864/nodeport-update-service updated: 1 ports\nI1006 23:59:39.807542       1 service.go:416] Adding new service port \"services-9864/nodeport-update-service\" at 100.67.203.79:80/TCP\nI1006 23:59:39.807657       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:39.875681       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"68.128141ms\"\nI1006 23:59:39.875830       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:39.890304       1 service.go:301] Service services-9864/nodeport-update-service updated: 1 ports\nI1006 23:59:39.973678       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"97.948249ms\"\nI1006 23:59:40.974856       1 service.go:416] Adding new service port \"services-9864/nodeport-update-service:tcp-port\" at 100.67.203.79:80/TCP\nI1006 23:59:40.974896       1 service.go:441] Removing service port \"services-9864/nodeport-update-service\"\nI1006 23:59:40.975020       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:41.031610       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-9864/nodeport-update-service:tcp-port\\\" (:31477/tcp4)\"\nI1006 23:59:41.047734       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"72.897156ms\"\nI1006 23:59:45.712500       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:45.759562       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.201153ms\"\nI1006 23:59:45.759722       1 proxier.go:845] \"Syncing iptables rules\"\nI1006 23:59:45.804114       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"44.508847ms\"\nI1007 00:00:03.780793       1 service.go:301] Service services-9864/nodeport-update-service updated: 2 ports\nI1007 00:00:03.780847       1 service.go:418] Updating existing service port 
\"services-9864/nodeport-update-service:tcp-port\" at 100.67.203.79:80/TCP\nI1007 00:00:03.780865       1 service.go:416] Adding new service port \"services-9864/nodeport-update-service:udp-port\" at 100.67.203.79:80/UDP\nI1007 00:00:03.780979       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:03.824896       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-9864/nodeport-update-service:udp-port\\\" (:31794/udp4)\"\nI1007 00:00:03.831685       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.833931ms\"\nI1007 00:00:03.832053       1 proxier.go:829] \"Stale service\" protocol=\"udp\" svcPortName=\"services-9864/nodeport-update-service:udp-port\" clusterIP=\"100.67.203.79\"\nI1007 00:00:03.832158       1 proxier.go:839] Stale udp service NodePort services-9864/nodeport-update-service:udp-port -> 31794\nI1007 00:00:03.832237       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:03.921642       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"89.907642ms\"\nI1007 00:00:09.768304       1 service.go:301] Service deployment-6782/test-rolling-update-with-lb updated: 1 ports\nI1007 00:00:09.768368       1 service.go:416] Adding new service port \"deployment-6782/test-rolling-update-with-lb\" at 100.71.8.24:80/TCP\nI1007 00:00:09.768547       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:09.833702       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for deployment-6782/test-rolling-update-with-lb\\\" (:32437/tcp4)\"\nI1007 00:00:09.841258       1 service_health.go:98] Opening healthcheck \"deployment-6782/test-rolling-update-with-lb\" on port 30696\nI1007 00:00:09.841544       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"73.176601ms\"\nI1007 00:00:09.841799       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:09.890477       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.89258ms\"\nI1007 00:00:13.974379       1 service.go:301] Service 
proxy-9315/proxy-service-lgkv5 updated: 4 ports\nI1007 00:00:13.974484       1 service.go:416] Adding new service port \"proxy-9315/proxy-service-lgkv5:portname1\" at 100.64.141.200:80/TCP\nI1007 00:00:13.974506       1 service.go:416] Adding new service port \"proxy-9315/proxy-service-lgkv5:portname2\" at 100.64.141.200:81/TCP\nI1007 00:00:13.974519       1 service.go:416] Adding new service port \"proxy-9315/proxy-service-lgkv5:tlsportname1\" at 100.64.141.200:443/TCP\nI1007 00:00:13.974531       1 service.go:416] Adding new service port \"proxy-9315/proxy-service-lgkv5:tlsportname2\" at 100.64.141.200:444/TCP\nI1007 00:00:13.974666       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:14.022919       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.474842ms\"\nI1007 00:00:14.058047       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:14.118804       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"60.920718ms\"\nI1007 00:00:16.541187       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:16.612316       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"71.259087ms\"\nI1007 00:00:16.612673       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:16.685184       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"72.823759ms\"\nI1007 00:00:18.636165       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:18.685033       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.01942ms\"\nI1007 00:00:21.324133       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:21.400052       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"76.067499ms\"\nI1007 00:00:21.400229       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:21.450080       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.982421ms\"\nW1007 00:00:21.761978       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: 
e2e-example-ingvjssq\nW1007 00:00:21.788901       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing6pscf\nW1007 00:00:21.812724       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62\nW1007 00:00:21.965388       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62\nW1007 00:00:22.017338       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62\nW1007 00:00:22.046154       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing97s62\nW1007 00:00:22.120655       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing6pscf\nW1007 00:00:22.124247       1 endpoints.go:274] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingvjssq\nI1007 00:00:24.113041       1 service.go:301] Service webhook-8158/e2e-test-webhook updated: 1 ports\nI1007 00:00:24.113109       1 service.go:416] Adding new service port \"webhook-8158/e2e-test-webhook\" at 100.67.178.63:8443/TCP\nI1007 00:00:24.113235       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:24.160361       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.248517ms\"\nI1007 00:00:24.160563       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:24.206745       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"46.334451ms\"\nI1007 00:00:26.636151       1 service.go:301] Service proxy-9315/proxy-service-lgkv5 updated: 0 ports\nI1007 00:00:26.636218       1 service.go:441] Removing service port 
\"proxy-9315/proxy-service-lgkv5:portname1\"\nI1007 00:00:26.636238       1 service.go:441] Removing service port \"proxy-9315/proxy-service-lgkv5:portname2\"\nI1007 00:00:26.636264       1 service.go:441] Removing service port \"proxy-9315/proxy-service-lgkv5:tlsportname1\"\nI1007 00:00:26.636277       1 service.go:441] Removing service port \"proxy-9315/proxy-service-lgkv5:tlsportname2\"\nI1007 00:00:26.636586       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:26.685409       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.183471ms\"\nI1007 00:00:26.689868       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:26.734076       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"44.289951ms\"\nI1007 00:00:36.782067       1 service.go:301] Service services-9864/nodeport-update-service updated: 0 ports\nI1007 00:00:36.782112       1 service.go:441] Removing service port \"services-9864/nodeport-update-service:udp-port\"\nI1007 00:00:36.782130       1 service.go:441] Removing service port \"services-9864/nodeport-update-service:tcp-port\"\nI1007 00:00:36.782247       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:36.838153       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.024188ms\"\nI1007 00:00:36.901171       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:36.957120       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.18236ms\"\nI1007 00:00:38.332096       1 service.go:301] Service webhook-8158/e2e-test-webhook updated: 0 ports\nI1007 00:00:38.332140       1 service.go:441] Removing service port \"webhook-8158/e2e-test-webhook\"\nI1007 00:00:38.332272       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:38.402542       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"70.388306ms\"\nI1007 00:00:39.403575       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:39.491067       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"87.644712ms\"\nI1007 
00:00:49.133945       1 service.go:301] Service deployment-6782/test-rolling-update-with-lb updated: 1 ports\nI1007 00:00:49.134011       1 service.go:418] Updating existing service port \"deployment-6782/test-rolling-update-with-lb\" at 100.71.8.24:80/TCP\nI1007 00:00:49.134140       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:49.227765       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"93.747847ms\"\nI1007 00:00:50.166992       1 service.go:301] Service services-2/tolerate-unready updated: 1 ports\nI1007 00:00:50.167129       1 service.go:416] Adding new service port \"services-2/tolerate-unready:http\" at 100.71.27.83:80/TCP\nI1007 00:00:50.167302       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:50.226832       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"59.703573ms\"\nI1007 00:00:50.227043       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:50.323057       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"96.163572ms\"\nI1007 00:00:52.704752       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:52.752781       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"48.126419ms\"\nI1007 00:00:52.837255       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:52.882827       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.749507ms\"\nI1007 00:00:53.883307       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:53.961408       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"78.237949ms\"\nI1007 00:00:55.672344       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:55.728155       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.937ms\"\nI1007 00:00:55.728482       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:55.775929       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.730178ms\"\nI1007 00:00:56.776636       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:56.840199       1 proxier.go:812] 
\"SyncProxyRules complete\" elapsed=\"63.715898ms\"\nI1007 00:00:59.104986       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:00:59.191288       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"86.410634ms\"\nI1007 00:01:02.648522       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:02.729332       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"80.985434ms\"\nI1007 00:01:02.729518       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:02.809431       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"80.047078ms\"\nI1007 00:01:04.210207       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:04.259949       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.954496ms\"\nI1007 00:01:07.214827       1 service.go:301] Service webhook-7555/e2e-test-webhook updated: 1 ports\nI1007 00:01:07.214885       1 service.go:416] Adding new service port \"webhook-7555/e2e-test-webhook\" at 100.68.71.133:8443/TCP\nI1007 00:01:07.215028       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:07.269737       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"54.852146ms\"\nI1007 00:01:07.270014       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:07.311388       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"41.608846ms\"\nI1007 00:01:08.474147       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:08.521421       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.397488ms\"\nI1007 00:01:08.792223       1 service.go:301] Service webhook-7555/e2e-test-webhook updated: 0 ports\nI1007 00:01:09.522339       1 service.go:441] Removing service port \"webhook-7555/e2e-test-webhook\"\nI1007 00:01:09.522582       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:09.572840       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.504866ms\"\nI1007 00:01:13.232825       1 service.go:301] Service webhook-6349/e2e-test-webhook updated: 1 ports\nI1007 00:01:13.232879  
     1 service.go:416] Adding new service port \"webhook-6349/e2e-test-webhook\" at 100.65.69.127:8443/TCP\nI1007 00:01:13.233012       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:13.372908       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"140.016046ms\"\nI1007 00:01:13.373069       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:13.456368       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"83.40785ms\"\nI1007 00:01:14.296886       1 service.go:301] Service services-2/tolerate-unready updated: 0 ports\nI1007 00:01:14.296935       1 service.go:441] Removing service port \"services-2/tolerate-unready:http\"\nI1007 00:01:14.297087       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:14.360830       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.882729ms\"\nI1007 00:01:15.361340       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:15.407123       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.882447ms\"\nI1007 00:01:16.247296       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:16.313394       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"66.237043ms\"\nI1007 00:01:20.219107       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:20.275803       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.81341ms\"\nI1007 00:01:20.278420       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:20.324582       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"46.24892ms\"\nI1007 00:01:21.325145       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:21.387879       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"62.962397ms\"\nI1007 00:01:31.040530       1 service.go:301] Service webhook-6349/e2e-test-webhook updated: 0 ports\nI1007 00:01:31.040582       1 service.go:441] Removing service port \"webhook-6349/e2e-test-webhook\"\nI1007 00:01:31.040711       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:31.091292    
   1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.70103ms\"\nI1007 00:01:31.165877       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:31.211569       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.781667ms\"\nI1007 00:01:32.212380       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:32.275889       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.791812ms\"\nI1007 00:01:34.161595       1 service.go:301] Service services-9062/hairpin-test updated: 1 ports\nI1007 00:01:34.161657       1 service.go:416] Adding new service port \"services-9062/hairpin-test\" at 100.71.18.41:8080/TCP\nI1007 00:01:34.161792       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:34.208810       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.152236ms\"\nI1007 00:01:34.209060       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:34.258313       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.459571ms\"\nI1007 00:01:40.566010       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:40.647516       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"81.603694ms\"\nI1007 00:01:44.239571       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:44.283154       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"43.770899ms\"\nI1007 00:01:44.318774       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:44.377982       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"59.333ms\"\nI1007 00:01:45.379031       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:45.431879       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"52.959638ms\"\nI1007 00:01:47.731712       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:47.776213       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"44.617656ms\"\nI1007 00:01:47.811096       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:47.857620       1 proxier.go:812] \"SyncProxyRules 
complete\" elapsed=\"46.656508ms\"\nI1007 00:01:48.858410       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:48.900603       1 service.go:301] Service conntrack-2227/svc-udp updated: 1 ports\nI1007 00:01:48.914908       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.682482ms\"\nI1007 00:01:49.571016       1 service.go:301] Service services-6611/multi-endpoint-test updated: 2 ports\nI1007 00:01:49.915970       1 service.go:416] Adding new service port \"conntrack-2227/svc-udp:udp\" at 100.69.243.183:80/UDP\nI1007 00:01:49.916024       1 service.go:416] Adding new service port \"services-6611/multi-endpoint-test:portname2\" at 100.71.14.54:81/TCP\nI1007 00:01:49.916040       1 service.go:416] Adding new service port \"services-6611/multi-endpoint-test:portname1\" at 100.71.14.54:80/TCP\nI1007 00:01:49.916185       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:49.959017       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for conntrack-2227/svc-udp:udp\\\" (:31123/udp4)\"\nI1007 00:01:49.967292       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"51.357448ms\"\nI1007 00:01:50.315373       1 service.go:301] Service services-9062/hairpin-test updated: 0 ports\nI1007 00:01:50.968076       1 service.go:441] Removing service port \"services-9062/hairpin-test\"\nI1007 00:01:50.968281       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:51.023674       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.744127ms\"\nI1007 00:01:52.809000       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:52.903784       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"94.907894ms\"\nI1007 00:01:56.448042       1 service.go:301] Service services-1219/nodeport-test updated: 1 ports\nI1007 00:01:56.448097       1 service.go:416] Adding new service port \"services-1219/nodeport-test:http\" at 100.68.145.78:80/TCP\nI1007 00:01:56.448286       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 
00:01:56.519641       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-1219/nodeport-test:http\\\" (:31620/tcp4)\"\nI1007 00:01:56.536854       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"88.745417ms\"\nI1007 00:01:56.537034       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:56.622538       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"85.632849ms\"\nI1007 00:01:57.542755       1 service.go:301] Service aggregator-242/sample-api updated: 1 ports\nI1007 00:01:57.542859       1 service.go:416] Adding new service port \"aggregator-242/sample-api\" at 100.69.214.245:7443/TCP\nI1007 00:01:57.543026       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:57.638802       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"95.943532ms\"\nI1007 00:01:58.639590       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:58.690673       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"51.293522ms\"\nI1007 00:01:59.690952       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:01:59.746291       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.527377ms\"\nI1007 00:02:02.612160       1 service.go:301] Service conntrack-7711/boom-server updated: 1 ports\nI1007 00:02:02.612215       1 service.go:416] Adding new service port \"conntrack-7711/boom-server\" at 100.69.77.194:9000/TCP\nI1007 00:02:02.612338       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:02.676018       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.796149ms\"\nI1007 00:02:02.676451       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:02.725647       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"49.38785ms\"\nI1007 00:02:05.197214       1 proxier.go:829] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-2227/svc-udp:udp\" clusterIP=\"100.69.243.183\"\nI1007 00:02:05.197325       1 proxier.go:839] Stale udp service NodePort conntrack-2227/svc-udp:udp -> 31123\nI1007 
00:02:05.197362       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:05.280897       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"83.892012ms\"\nI1007 00:02:06.837504       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:06.889736       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"52.370603ms\"\nI1007 00:02:06.996964       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:07.080667       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"83.826391ms\"\nI1007 00:02:07.229948       1 service.go:301] Service services-6611/multi-endpoint-test updated: 0 ports\nI1007 00:02:08.081142       1 service.go:441] Removing service port \"services-6611/multi-endpoint-test:portname1\"\nI1007 00:02:08.081183       1 service.go:441] Removing service port \"services-6611/multi-endpoint-test:portname2\"\nI1007 00:02:08.081344       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:08.172989       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"91.843893ms\"\nI1007 00:02:11.055839       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:11.111903       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.221715ms\"\nI1007 00:02:13.722007       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:13.772162       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.303803ms\"\nI1007 00:02:13.832590       1 service.go:301] Service aggregator-242/sample-api updated: 0 ports\nI1007 00:02:13.832650       1 service.go:441] Removing service port \"aggregator-242/sample-api\"\nI1007 00:02:13.832890       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:13.879127       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"46.460582ms\"\nI1007 00:02:14.879960       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:14.942987       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"63.225398ms\"\nI1007 00:02:21.195775       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 
00:02:21.246242       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.627063ms\"\nI1007 00:02:21.525062       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:21.572214       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.287306ms\"\nI1007 00:02:22.572672       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:22.625292       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"52.770885ms\"\nI1007 00:02:23.910400       1 service.go:301] Service webhook-7530/e2e-test-webhook updated: 1 ports\nI1007 00:02:23.910603       1 service.go:416] Adding new service port \"webhook-7530/e2e-test-webhook\" at 100.68.253.240:8443/TCP\nI1007 00:02:23.910749       1 proxier.go:845] \"Syncing iptables rules\"\nI1007 00:02:23.956676       1 proxier.go:812] \"SyncProxyRules