Error lines from build-log.txt
... skipping 183 lines ...
Updating project ssh metadata...
.............................................Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci].
.done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I0622 22:05:43.381255 5946 up.go:44] Cleaning up any leaked resources from previous cluster
I0622 22:05:43.381362 5946 dumplogs.go:45] /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kops toolbox dump --name e2e-e2e-kops-gce-stable.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh2548602952/key --ssh-user prow
W0622 22:05:43.593386 5946 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0622 22:05:43.593456 5946 down.go:48] /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kops delete cluster --name e2e-e2e-kops-gce-stable.k8s.local --yes
I0622 22:05:43.614934 5997 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 22:05:43.615042 5997 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-gce-stable.k8s.local" not found
I0622 22:05:43.718067 5946 gcs.go:51] gsutil ls -b -p k8s-jkns-e2e-kubeadm-gce-ci gs://k8s-jkns-e2e-kubeadm-gce-ci-state-2e
I0622 22:05:45.511596 5946 gcs.go:70] gsutil mb -p k8s-jkns-e2e-kubeadm-gce-ci gs://k8s-jkns-e2e-kubeadm-gce-ci-state-2e
Creating gs://k8s-jkns-e2e-kubeadm-gce-ci-state-2e/...
I0622 22:05:47.675593 5946 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/22 22:05:47 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
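The 404 above is the expected outcome when the CI VM has no external access config: the runner first asks the GCE instance metadata service for an external IP, then falls back to an external echo service (the `curl https://ip.jsb.workers.dev` that follows). A minimal sketch of that ordered-fallback pattern, with hypothetical fetcher callables standing in for the real HTTP calls — note that a real metadata request must also send the `Metadata-Flavor: Google` header:

```python
from typing import Callable, Sequence


def external_ip(fetchers: Sequence[Callable[[], str]]) -> str:
    """Try each IP source in order; return the first that succeeds.

    Mirrors the pattern in the log: query the GCE metadata server first
    (it returns 404 when the instance has no external access config),
    then fall back to an external IP echo service.
    """
    errors = []
    for fetch in fetchers:
        try:
            return fetch().strip()
        except Exception as exc:  # a real implementation would narrow this
            errors.append(exc)
    raise RuntimeError(f"all IP sources failed: {errors}")
```

This is illustrative only; the actual lookup lives in the test harness's `http.go` shown in the log.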
I0622 22:05:47.689558 5946 http.go:37] curl https://ip.jsb.workers.dev
I0622 22:05:47.784132 5946 up.go:159] /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kops create cluster --name e2e-e2e-kops-gce-stable.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.25.0-alpha.1 --ssh-public-key /tmp/kops-ssh2548602952/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --gce-service-account=default --admin-access 34.134.227.164/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east1-b --master-size e2-standard-2 --project k8s-jkns-e2e-kubeadm-gce-ci
I0622 22:05:47.810681 6287 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 22:05:47.810780 6287 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0622 22:05:47.848732 6287 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh2548602952/key.pub
I0622 22:05:48.111389 6287 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 375 lines ...
I0622 22:05:54.955530 6308 keypair.go:225] Issuing new certificate: "etcd-clients-ca"
I0622 22:05:54.958351 6308 keypair.go:225] Issuing new certificate: "etcd-manager-ca-main"
W0622 22:05:55.054856 6308 vfs_castore.go:379] CA private key was not found
I0622 22:05:55.150716 6308 keypair.go:225] Issuing new certificate: "service-account"
I0622 22:05:55.152526 6308 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0622 22:06:07.691342 6308 executor.go:111] Tasks: 42 done / 68 total; 20 can run
W0622 22:06:17.605169 6308 executor.go:139] error running task "ForwardingRule/api-e2e-e2e-kops-gce-stable-k8s-local" (9m50s remaining to succeed): error creating ForwardingRule "api-e2e-e2e-kops-gce-stable-k8s-local": googleapi: Error 400: The resource 'projects/k8s-jkns-e2e-kubeadm-gce-ci/regions/us-east1/targetPools/api-e2e-e2e-kops-gce-stable-k8s-local' is not ready, resourceNotReady
I0622 22:06:17.605415 6308 executor.go:111] Tasks: 61 done / 68 total; 5 can run
I0622 22:06:24.540714 6308 executor.go:111] Tasks: 66 done / 68 total; 2 can run
I0622 22:06:35.440276 6308 executor.go:111] Tasks: 68 done / 68 total; 0 can run
I0622 22:06:35.494365 6308 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-e2e-kops-gce-stable.k8s.local
... skipping 8 lines ...
I0622 22:06:45.854519 5946 up.go:243] /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kops validate cluster --name e2e-e2e-kops-gce-stable.k8s.local --count 10 --wait 15m0s
I0622 22:06:45.876250 6326 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 22:06:45.876487 6326 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-gce-stable.k8s.local
W0622 22:07:16.176430 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: i/o timeout
W0622 22:07:26.211273 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:07:36.245558 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:07:46.280451 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:07:56.315156 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:08:06.351221 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:08:16.387089 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:08:26.421718 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:08:36.456068 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:08:46.491957 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:08:56.526403 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:09:06.561999 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": dial tcp 34.138.125.141:443: connect: connection refused
W0622 22:09:26.596805 6326 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.138.125.141/api/v1/nodes": net/http: TLS handshake timeout
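The repeated dial errors above (i/o timeout, then connection refused, then TLS handshake timeout) are the normal progression while the control plane boots; `kops validate cluster --count 10 --wait 15m0s` simply retries on an interval until the deadline passes. A minimal sketch of that retry shape, with injectable clock/sleep hooks for testability — an assumption for illustration, not kops's actual implementation:

```python
import time
from typing import Callable


def wait_until(probe: Callable[[], bool], timeout: float, interval: float = 10.0,
               clock: Callable[[], float] = time.monotonic,
               sleep: Callable[[float], None] = time.sleep) -> bool:
    """Retry probe() until it returns True or the deadline passes.

    Each failed probe is retried on a fixed interval; only exhausting
    the overall timeout counts as a hard failure, matching how the
    validator treats transient apiserver dial errors as retryable.
    """
    deadline = clock() + timeout
    while True:
        if probe():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```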
I0622 22:09:39.327204 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 5 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/master-us-east1-b-xt3x machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/master-us-east1-b-xt3x" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-3xs4 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-3xs4" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6" has not yet joined cluster
Validation Failed
W0622 22:09:40.345875 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
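Each validation round prints a KIND/NAME/MESSAGE table of remaining problems (machines that have not yet joined, nodes not ready, pods still pending). A hypothetical helper that buckets those lines by kind can make long logs like this one easier to scan — illustrative only, not part of kops:

```python
def summarize_validation(lines):
    """Count kops validation-error lines by their KIND column.

    Assumes the whitespace-separated KIND NAME MESSAGE layout seen in
    the log; unrecognized or blank lines are ignored.
    """
    counts = {}
    for line in lines:
        parts = line.split()
        if parts and parts[0] in ("Machine", "Node", "Pod"):
            counts[parts[0]] = counts.get(parts[0], 0) + 1
    return counts
```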
I0622 22:09:50.765237 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 7 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-3xs4 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-3xs4" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6" has not yet joined cluster
Pod kube-system/kube-controller-manager-master-us-east1-b-xt3x system-cluster-critical pod "kube-controller-manager-master-us-east1-b-xt3x" is not ready (kube-controller-manager)
Validation Failed
W0622 22:09:51.535679 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:10:01.936659 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/master-us-east1-b-xt3x machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/master-us-east1-b-xt3x" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-3xs4 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-3xs4" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6" has not yet joined cluster
Validation Failed
W0622 22:10:02.823355 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:10:13.196501 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 11 lines ...
Pod kube-system/cloud-controller-manager-qqhb7 system-cluster-critical pod "cloud-controller-manager-qqhb7" is pending
Pod kube-system/coredns-autoscaler-5d4dbc7b59-r8sg9 system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-r8sg9" is pending
Pod kube-system/coredns-dd657c749-6hqrr system-cluster-critical pod "coredns-dd657c749-6hqrr" is pending
Pod kube-system/dns-controller-78bc9bdd66-c5tpp system-cluster-critical pod "dns-controller-78bc9bdd66-c5tpp" is pending
Pod kube-system/kops-controller-rdhz4 system-cluster-critical pod "kops-controller-rdhz4" is pending
Validation Failed
W0622 22:10:14.098856 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:10:24.524052 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 12 lines ...
Pod kube-system/coredns-autoscaler-5d4dbc7b59-r8sg9 system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-r8sg9" is pending
Pod kube-system/coredns-dd657c749-6hqrr system-cluster-critical pod "coredns-dd657c749-6hqrr" is pending
Pod kube-system/dns-controller-78bc9bdd66-c5tpp system-cluster-critical pod "dns-controller-78bc9bdd66-c5tpp" is pending
Pod kube-system/kops-controller-rdhz4 system-cluster-critical pod "kops-controller-rdhz4" is pending
Pod kube-system/kube-scheduler-master-us-east1-b-xt3x system-cluster-critical pod "kube-scheduler-master-us-east1-b-xt3x" is pending
Validation Failed
W0622 22:10:25.447342 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:10:35.838416 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 8 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-t83b" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vf6p" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6" has not yet joined cluster
Pod kube-system/coredns-autoscaler-5d4dbc7b59-r8sg9 system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-r8sg9" is pending
Pod kube-system/coredns-dd657c749-6hqrr system-cluster-critical pod "coredns-dd657c749-6hqrr" is pending
Validation Failed
W0622 22:10:37.646403 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:10:48.008885 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 10 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/nodes-us-east1-b-vgn6" has not yet joined cluster
Node master-us-east1-b-xt3x node "master-us-east1-b-xt3x" of role "master" is not ready
Pod kube-system/coredns-autoscaler-5d4dbc7b59-r8sg9 system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-r8sg9" is pending
Pod kube-system/coredns-dd657c749-6hqrr system-cluster-critical pod "coredns-dd657c749-6hqrr" is pending
Pod kube-system/metadata-proxy-v0.12-pcv9n system-node-critical pod "metadata-proxy-v0.12-pcv9n" is pending
Validation Failed
W0622 22:10:48.894429 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:10:59.333963 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 14 lines ...
Pod kube-system/coredns-autoscaler-5d4dbc7b59-r8sg9 system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-r8sg9" is pending
Pod kube-system/coredns-dd657c749-6hqrr system-cluster-critical pod "coredns-dd657c749-6hqrr" is pending
Pod kube-system/metadata-proxy-v0.12-6jqhq system-node-critical pod "metadata-proxy-v0.12-6jqhq" is pending
Pod kube-system/metadata-proxy-v0.12-jr6xl system-node-critical pod "metadata-proxy-v0.12-jr6xl" is pending
Pod kube-system/metadata-proxy-v0.12-v5ngn system-node-critical pod "metadata-proxy-v0.12-v5ngn" is pending
Validation Failed
W0622 22:11:00.387808 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:11:10.956127 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 8 lines ...
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/coredns-dd657c749-6hqrr system-cluster-critical pod "coredns-dd657c749-6hqrr" is pending
Pod kube-system/coredns-dd657c749-jvqks system-cluster-critical pod "coredns-dd657c749-jvqks" is pending
Validation Failed
W0622 22:11:11.803838 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:11:22.179773 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 57 lines ...
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/kube-proxy-nodes-us-east1-b-t83b system-node-critical pod "kube-proxy-nodes-us-east1-b-t83b" is pending
Pod kube-system/kube-proxy-nodes-us-east1-b-vf6p system-node-critical pod "kube-proxy-nodes-us-east1-b-vf6p" is pending
Pod kube-system/kube-proxy-nodes-us-east1-b-vgn6 system-node-critical pod "kube-proxy-nodes-us-east1-b-vgn6" is pending
Validation Failed
W0622 22:11:56.520579 6326 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 22:12:06.849429 6326 gce_cloud.go:295] Scanning zones: [us-east1-b us-east1-c us-east1-d us-east1-a]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east1-b Master e2-standard-2 1 1 us-east1
nodes-us-east1-b Node n1-standard-2 4 4 us-east1
... skipping 183 lines ...
===================================
Random Seed: 1655936050 - Will randomize all specs
Will run 7042 specs
Running in parallel across 25 nodes
Jun 22 22:14:27.128: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Jun 22 22:14:27.128: INFO: > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Jun 22 22:14:27.128: INFO: >
Jun 22 22:14:27.128: INFO: Cluster image sources lookup failed: exit status 1
Jun 22 22:14:27.128: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:14:27.130: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 22 22:14:27.302: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 22 22:14:27.434: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 22 22:14:27.434: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 348 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 479 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 38 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:14:28.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "kubectl-2574" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":1,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:28.410: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 54 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:14:28.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "resourcequota-7434" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 59 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:14:28.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "netpol-1960" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":6,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:29.159: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:14:29.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "csistoragecapacity-1123" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:29.244: INFO: Only supported for providers [azure] (not gce)
... skipping 100 lines ...
test/e2e/kubectl/framework.go:23
Kubectl validation
test/e2e/kubectl/kubectl.go:1033
should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
test/e2e/kubectl/kubectl.go:1078
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:40.027: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 84 lines ...
• [SLOW TEST:13.059 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:41.544: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 110 lines ...
test/e2e/common/node/runtime.go:43
when running a container with a new image
test/e2e/common/node/runtime.go:259
should be able to pull from private registry with secret [NodeConformance]
test/e2e/common/node/runtime.go:386
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":4,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:42.876: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 111 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-0ebabbbc-1f3a-4bca-977e-1969818603b6
STEP: Creating a pod to test consume configMaps
Jun 22 22:14:27.913: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87" in namespace "projected-4095" to be "Succeeded or Failed"
Jun 22 22:14:27.961: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 48.555888ms
Jun 22 22:14:30.003: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089597465s
Jun 22 22:14:31.997: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083917023s
Jun 22 22:14:34.000: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086693578s
Jun 22 22:14:36.002: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088661122s
Jun 22 22:14:37.996: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082926722s
Jun 22 22:14:39.998: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0848902s
Jun 22 22:14:42.001: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 14.087616154s
Jun 22 22:14:44.002: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 16.089061577s
Jun 22 22:14:45.996: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 18.083493486s
Jun 22 22:14:48.000: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Pending", Reason="", readiness=false. Elapsed: 20.08673133s
Jun 22 22:14:49.997: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.08365309s
STEP: Saw pod success
Jun 22 22:14:49.997: INFO: Pod "pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87" satisfied condition "Succeeded or Failed"
Jun 22 22:14:50.033: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:14:50.373: INFO: Waiting for pod pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87 to disappear
Jun 22 22:14:50.408: INFO: Pod pod-projected-configmaps-9c174ff7-620b-4c40-899e-bd7497cfae87 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:22.921 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:23.542 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:905
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":1,"skipped":3,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:14:30.288: INFO: >>> kubeConfig: /root/.kube/config
... skipping 3 lines ...
[It] should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
Jun 22 22:14:30.541: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 22 22:14:30.541: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8vdt
STEP: Creating a pod to test subpath
Jun 22 22:14:30.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8vdt" in namespace "provisioning-1671" to be "Succeeded or Failed"
Jun 22 22:14:30.613: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 33.920106ms
Jun 22 22:14:32.648: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06914108s
Jun 22 22:14:34.651: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071703158s
Jun 22 22:14:36.649: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069596263s
Jun 22 22:14:38.647: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068521846s
Jun 22 22:14:40.653: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073659439s
... skipping 4 lines ...
Jun 22 22:14:50.648: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06860381s
Jun 22 22:14:52.648: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 22.068725606s
Jun 22 22:14:54.649: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 24.069839086s
Jun 22 22:14:56.647: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Pending", Reason="", readiness=false. Elapsed: 26.067791072s
Jun 22 22:14:58.647: INFO: Pod "pod-subpath-test-inlinevolume-8vdt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.068323815s
STEP: Saw pod success
Jun 22 22:14:58.647: INFO: Pod "pod-subpath-test-inlinevolume-8vdt" satisfied condition "Succeeded or Failed"
Jun 22 22:14:58.685: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-inlinevolume-8vdt container test-container-subpath-inlinevolume-8vdt: <nil>
STEP: delete the pod
Jun 22 22:14:58.776: INFO: Waiting for pod pod-subpath-test-inlinevolume-8vdt to disappear
Jun 22 22:14:58.813: INFO: Pod pod-subpath-test-inlinevolume-8vdt no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8vdt
Jun 22 22:14:58.813: INFO: Deleting pod "pod-subpath-test-inlinevolume-8vdt" in namespace "provisioning-1671"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:14:58.984: INFO: Only supported for providers [openstack] (not gce)
... skipping 22 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 22 22:15:00.727: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
... skipping 9 lines ...
test/e2e/common/node/runtime.go:43
on terminated container
test/e2e/common/node/runtime.go:136
should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:00.916: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
test/e2e/common/storage/host_path.go:39
[It] should support r/w [NodeConformance]
test/e2e/common/storage/host_path.go:67
STEP: Creating a pod to test hostPath r/w
Jun 22 22:14:41.928: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9503" to be "Succeeded or Failed"
Jun 22 22:14:41.963: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 35.065292ms
Jun 22 22:14:44.000: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071896354s
Jun 22 22:14:45.999: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071025829s
Jun 22 22:14:48.000: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072417262s
Jun 22 22:14:49.998: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070328928s
Jun 22 22:14:52.003: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075060215s
Jun 22 22:14:54.000: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.072721944s
Jun 22 22:14:55.999: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.071223699s
Jun 22 22:14:57.999: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.070827416s
Jun 22 22:15:00.000: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.071960632s
Jun 22 22:15:02.000: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.072579319s
STEP: Saw pod success
Jun 22 22:15:02.000: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 22 22:15:02.036: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 22 22:15:02.117: INFO: Waiting for pod pod-host-path-test to disappear
Jun 22 22:15:02.153: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:20.594 seconds]
[sig-storage] HostPath
test/e2e/common/storage/framework.go:23
should support r/w [NodeConformance]
test/e2e/common/storage/host_path.go:67
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":3,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:02.242: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/framework/framework.go:187
... skipping 69 lines ...
• [SLOW TEST:35.410 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
should block an eviction until the PDB is updated to allow it [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:03.192: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
• [SLOW TEST:34.774 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
should validate Replicaset Status endpoints [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:03.479: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 142 lines ...
• [SLOW TEST:36.783 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
should observe PodDisruptionBudget status updated [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:04.555: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:187
... skipping 76 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:04.586: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 79 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:15:05.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1085" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":2,"skipped":12,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-2b341cbf-db78-4dfd-b88d-08ca37d4a125
STEP: Creating a pod to test consume configMaps
Jun 22 22:14:51.568: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425" in namespace "projected-9449" to be "Succeeded or Failed"
Jun 22 22:14:51.603: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 33.959719ms
Jun 22 22:14:53.638: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069043085s
Jun 22 22:14:55.638: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069198739s
Jun 22 22:14:57.637: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068536439s
Jun 22 22:14:59.636: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067592402s
Jun 22 22:15:01.637: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06858495s
Jun 22 22:15:03.638: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069415028s
Jun 22 22:15:05.642: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.073569175s
STEP: Saw pod success
Jun 22 22:15:05.642: INFO: Pod "pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425" satisfied condition "Succeeded or Failed"
Jun 22 22:15:05.677: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:15:05.759: INFO: Waiting for pod pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425 to disappear
Jun 22 22:15:05.792: INFO: Pod pod-projected-configmaps-85b31ec0-ac85-466c-8b77-d9bd974dd425 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.612 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 106 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:07.489: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 91 lines ...
test/e2e/kubectl/portforward.go:454
that expects a client request
test/e2e/kubectl/portforward.go:455
should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:459
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":2,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:08.709: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 137 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:14:27.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
test/e2e/apps/job.go:271
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
test/e2e/framework/framework.go:187
Jun 22 22:15:10.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2164" for this suite.
• [SLOW TEST:42.535 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are not locally restarted
test/e2e/apps/job.go:271
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:03.504: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
• [SLOW TEST:11.622 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
test/e2e/apimachinery/resource_quota.go:532
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":3,"skipped":18,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
Jun 22 22:14:28.058: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-885 create -f -'
Jun 22 22:14:28.797: INFO: stderr: ""
Jun 22 22:14:28.797: INFO: stdout: "pod/httpd created\n"
Jun 22 22:14:28.797: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 22 22:14:28.797: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-885" to be "running and ready"
Jun 22 22:14:28.834: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 37.313083ms
Jun 22 22:14:28.834: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:30.883: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08600681s
Jun 22 22:14:30.883: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:32.869: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072034366s
Jun 22 22:14:32.869: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:34.870: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073172548s
Jun 22 22:14:34.870: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:36.867: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070619634s
Jun 22 22:14:36.867: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:38.872: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075654217s
Jun 22 22:14:38.873: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:40.868: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071137033s
Jun 22 22:14:40.868: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:42.869: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.071985862s
Jun 22 22:14:42.869: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:44.868: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.071185878s
Jun 22 22:14:44.868: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:46.872: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.074884882s
Jun 22 22:14:46.872: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:48.869: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.072397125s
Jun 22 22:14:48.869: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:50.869: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.072547291s
Jun 22 22:14:50.869: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:52.869: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.072616544s
Jun 22 22:14:52.869: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:14:54.869: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.071845591s
Jun 22 22:14:54.869: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-east1-b-3xs4' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC }]
Jun 22 22:14:56.868: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 28.071419165s
Jun 22 22:14:56.868: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-east1-b-3xs4' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC }]
Jun 22 22:14:58.868: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 30.070993989s
Jun 22 22:14:58.868: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-east1-b-3xs4' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:14:28 +0000 UTC }]
Jun 22 22:15:00.871: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 32.074711458s
Jun 22 22:15:00.872: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 22 22:15:00.872: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should contain last line of the log
test/e2e/kubectl/kubectl.go:651
STEP: executing a command with run
Jun 22 22:15:00.872: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-885 run run-log-test --image=registry.k8s.io/e2e-test-images/busybox:1.29-2 --restart=OnFailure --pod-running-timeout=2m0s -- sh -c sleep 10; seq 100 | while read i; do echo $i; sleep 0.01; done; echo EOF'
Jun 22 22:15:01.057: INFO: stderr: ""
Jun 22 22:15:01.057: INFO: stdout: "pod/run-log-test created\n"
Jun 22 22:15:01.057: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [run-log-test]
Jun 22 22:15:01.057: INFO: Waiting up to 5m0s for pod "run-log-test" in namespace "kubectl-885" to be "running and ready, or succeeded"
Jun 22 22:15:01.090: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 33.055527ms
Jun 22 22:15:01.090: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:15:03.130: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073629339s
Jun 22 22:15:03.130: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:15:05.125: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068167854s
Jun 22 22:15:05.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:15:07.130: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073042531s
Jun 22 22:15:07.130: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:15:09.125: INFO: Pod "run-log-test": Phase="Running", Reason="", readiness=true. Elapsed: 8.068398854s
Jun 22 22:15:09.125: INFO: Pod "run-log-test" satisfied condition "running and ready, or succeeded"
Jun 22 22:15:09.125: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [run-log-test]
Jun 22 22:15:09.125: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-885 logs -f run-log-test'
Jun 22 22:15:14.766: INFO: stderr: ""
Jun 22 22:15:14.767: INFO: stdout: "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\nEOF\n"
... skipping 20 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:407
should contain last line of the log
test/e2e/kubectl/kubectl.go:651
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":1,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:15.522: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 88 lines ...
• [SLOW TEST:51.338 seconds]
[sig-network] EndpointSlice
test/e2e/network/common/framework.go:23
should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:19.071: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:15:19.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5268" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":2,"skipped":19,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:19.714: INFO: Only supported for providers [vsphere] (not gce)
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-map-c9e0514f-c3a2-4a7a-932b-b51abf608cd8
STEP: Creating a pod to test consume secrets
Jun 22 22:15:09.822: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810" in namespace "projected-8549" to be "Succeeded or Failed"
Jun 22 22:15:09.857: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810": Phase="Pending", Reason="", readiness=false. Elapsed: 34.787405ms
Jun 22 22:15:11.893: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071112773s
Jun 22 22:15:13.893: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070564855s
Jun 22 22:15:15.893: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070693763s
Jun 22 22:15:17.893: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070907352s
Jun 22 22:15:19.894: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071739801s
STEP: Saw pod success
Jun 22 22:15:19.894: INFO: Pod "pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810" satisfied condition "Succeeded or Failed"
Jun 22 22:15:19.933: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 22 22:15:20.019: INFO: Waiting for pod pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810 to disappear
Jun 22 22:15:20.053: INFO: Pod pod-projected-secrets-937b4bff-0273-49b8-84f3-8641b875f810 no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.658 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:20.172: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:15:20.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-25" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":3,"skipped":22,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:20.945: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:10.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-1f477aa2-e417-4a5d-865b-51292248917b
STEP: Creating a pod to test consume configMaps
Jun 22 22:15:10.645: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0" in namespace "projected-8100" to be "Succeeded or Failed"
Jun 22 22:15:10.694: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Pending", Reason="", readiness=false. Elapsed: 49.27282ms
Jun 22 22:15:12.729: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084312031s
Jun 22 22:15:14.729: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084066732s
Jun 22 22:15:16.730: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084463745s
Jun 22 22:15:18.730: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085328847s
Jun 22 22:15:20.732: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.086532237s
Jun 22 22:15:22.729: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.084436589s
STEP: Saw pod success
Jun 22 22:15:22.730: INFO: Pod "pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0" satisfied condition "Succeeded or Failed"
Jun 22 22:15:22.765: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 22 22:15:22.860: INFO: Waiting for pod pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0 to disappear
Jun 22 22:15:22.896: INFO: Pod pod-projected-configmaps-364ed167-78c3-4b1a-ba51-efe49fffabe0 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.715 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:23.011: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: vsphere]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [vsphere] (not gce)
test/e2e/storage/drivers/in_tree.go:1439
------------------------------
... skipping 10 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 164 lines ...
Jun 22 22:15:10.009: INFO: PersistentVolumeClaim pvc-ksngv found but phase is Pending instead of Bound.
Jun 22 22:15:12.044: INFO: PersistentVolumeClaim pvc-ksngv found and phase=Bound (12.243875421s)
Jun 22 22:15:12.044: INFO: Waiting up to 3m0s for PersistentVolume local-klvsz to have phase Bound
Jun 22 22:15:12.078: INFO: PersistentVolume local-klvsz found and phase=Bound (33.54106ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gx4h
STEP: Creating a pod to test subpath
Jun 22 22:15:12.182: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gx4h" in namespace "provisioning-7416" to be "Succeeded or Failed"
Jun 22 22:15:12.220: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h": Phase="Pending", Reason="", readiness=false. Elapsed: 37.341488ms
Jun 22 22:15:14.257: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074488968s
Jun 22 22:15:16.256: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073257848s
Jun 22 22:15:18.255: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072316873s
Jun 22 22:15:20.258: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074955312s
Jun 22 22:15:22.255: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072836999s
STEP: Saw pod success
Jun 22 22:15:22.256: INFO: Pod "pod-subpath-test-preprovisionedpv-gx4h" satisfied condition "Succeeded or Failed"
Jun 22 22:15:22.291: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-gx4h container test-container-subpath-preprovisionedpv-gx4h: <nil>
STEP: delete the pod
Jun 22 22:15:22.367: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gx4h to disappear
Jun 22 22:15:22.401: INFO: Pod pod-subpath-test-preprovisionedpv-gx4h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gx4h
Jun 22 22:15:22.401: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gx4h" in namespace "provisioning-7416"
... skipping 54 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:15:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2312" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":4,"skipped":28,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:25.581: INFO: Only supported for providers [aws] (not gce)
... skipping 14 lines ...
Only supported for providers [aws] (not gce)
test/e2e/storage/drivers/in_tree.go:1722
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:23.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:15:26.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-216" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:26.217: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 122 lines ...
• [SLOW TEST:59.260 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
test/e2e/network/service.go:933
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:27.002: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:187
... skipping 46 lines ...
Jun 22 22:15:09.811: INFO: PersistentVolumeClaim pvc-znj5k found but phase is Pending instead of Bound.
Jun 22 22:15:11.846: INFO: PersistentVolumeClaim pvc-znj5k found and phase=Bound (4.106201976s)
Jun 22 22:15:11.846: INFO: Waiting up to 3m0s for PersistentVolume local-qcn96 to have phase Bound
Jun 22 22:15:11.880: INFO: PersistentVolume local-qcn96 found and phase=Bound (33.838243ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lbrv
STEP: Creating a pod to test subpath
Jun 22 22:15:11.994: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lbrv" in namespace "provisioning-1153" to be "Succeeded or Failed"
Jun 22 22:15:12.029: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 35.36347ms
Jun 22 22:15:14.070: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076047405s
Jun 22 22:15:16.067: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073020836s
Jun 22 22:15:18.076: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082028908s
Jun 22 22:15:20.071: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07740831s
Jun 22 22:15:22.071: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077684027s
Jun 22 22:15:24.066: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.072488668s
Jun 22 22:15:26.066: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.071896037s
STEP: Saw pod success
Jun 22 22:15:26.066: INFO: Pod "pod-subpath-test-preprovisionedpv-lbrv" satisfied condition "Succeeded or Failed"
Jun 22 22:15:26.100: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-preprovisionedpv-lbrv container test-container-subpath-preprovisionedpv-lbrv: <nil>
STEP: delete the pod
Jun 22 22:15:26.193: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lbrv to disappear
Jun 22 22:15:26.227: INFO: Pod pod-subpath-test-preprovisionedpv-lbrv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lbrv
Jun 22 22:15:26.227: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lbrv" in namespace "provisioning-1153"
... skipping 221 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:29.452: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 96 lines ...
Jun 22 22:15:09.771: INFO: PersistentVolumeClaim pvc-fwhkq found but phase is Pending instead of Bound.
Jun 22 22:15:11.807: INFO: PersistentVolumeClaim pvc-fwhkq found and phase=Bound (10.224546208s)
Jun 22 22:15:11.807: INFO: Waiting up to 3m0s for PersistentVolume local-hf8zv to have phase Bound
Jun 22 22:15:11.842: INFO: PersistentVolume local-hf8zv found and phase=Bound (34.572569ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5jxt
STEP: Creating a pod to test subpath
Jun 22 22:15:11.950: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5jxt" in namespace "provisioning-3648" to be "Succeeded or Failed"
Jun 22 22:15:11.987: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 36.968961ms
Jun 22 22:15:14.023: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073281777s
Jun 22 22:15:16.025: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074588027s
Jun 22 22:15:18.023: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072965595s
Jun 22 22:15:20.026: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076069305s
Jun 22 22:15:22.023: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072869598s
Jun 22 22:15:24.024: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.074217152s
Jun 22 22:15:26.023: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.072517s
Jun 22 22:15:28.022: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.072086468s
Jun 22 22:15:30.024: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.074218835s
Jun 22 22:15:32.024: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.073791037s
STEP: Saw pod success
Jun 22 22:15:32.024: INFO: Pod "pod-subpath-test-preprovisionedpv-5jxt" satisfied condition "Succeeded or Failed"
Jun 22 22:15:32.062: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-5jxt container test-container-volume-preprovisionedpv-5jxt: <nil>
STEP: delete the pod
Jun 22 22:15:32.146: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5jxt to disappear
Jun 22 22:15:32.181: INFO: Pod pod-subpath-test-preprovisionedpv-5jxt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5jxt
Jun 22 22:15:32.181: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5jxt" in namespace "provisioning-3648"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":4,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:32.844: INFO: Only supported for providers [azure] (not gce)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-map-11de9ea1-cc98-41a5-babc-9ac647cc23fa
STEP: Creating a pod to test consume secrets
Jun 22 22:15:15.476: INFO: Waiting up to 5m0s for pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951" in namespace "secrets-2482" to be "Succeeded or Failed"
Jun 22 22:15:15.510: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 34.335888ms
Jun 22 22:15:17.547: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071280056s
Jun 22 22:15:19.551: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075539525s
Jun 22 22:15:21.545: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069734878s
Jun 22 22:15:23.546: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069909554s
Jun 22 22:15:25.549: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072842077s
Jun 22 22:15:27.547: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 12.070823409s
Jun 22 22:15:29.545: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 14.069407054s
Jun 22 22:15:31.545: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 16.0690065s
Jun 22 22:15:33.549: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Pending", Reason="", readiness=false. Elapsed: 18.073581873s
Jun 22 22:15:35.545: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.06917182s
STEP: Saw pod success
Jun 22 22:15:35.545: INFO: Pod "pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951" satisfied condition "Succeeded or Failed"
Jun 22 22:15:35.579: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951 container secret-volume-test: <nil>
STEP: delete the pod
Jun 22 22:15:35.657: INFO: Waiting for pod pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951 to disappear
Jun 22 22:15:35.694: INFO: Pod pod-secrets-9318e678-1b87-4a7a-8b31-af9fc6fdd951 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:20.628 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:35.813: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 14 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:27.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:8.811 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
pod should support memory backed volumes of specified size
test/e2e/common/storage/empty_dir.go:298
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":4,"skipped":21,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:35.970: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 29 lines ...
[It] should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
Jun 22 22:15:15.823: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 22:15:15.863: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mp6q
STEP: Creating a pod to test subpath
Jun 22 22:15:15.901: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mp6q" in namespace "provisioning-390" to be "Succeeded or Failed"
Jun 22 22:15:15.945: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 43.352557ms
Jun 22 22:15:17.980: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078515576s
Jun 22 22:15:19.980: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07850431s
Jun 22 22:15:21.979: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078077912s
Jun 22 22:15:23.980: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079278016s
Jun 22 22:15:25.980: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078317217s
Jun 22 22:15:27.979: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077873403s
Jun 22 22:15:29.982: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 14.080398455s
Jun 22 22:15:31.979: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.077911925s
Jun 22 22:15:33.980: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Pending", Reason="", readiness=false. Elapsed: 18.07840691s
Jun 22 22:15:35.979: INFO: Pod "pod-subpath-test-inlinevolume-mp6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.077545152s
STEP: Saw pod success
Jun 22 22:15:35.979: INFO: Pod "pod-subpath-test-inlinevolume-mp6q" satisfied condition "Succeeded or Failed"
Jun 22 22:15:36.021: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-inlinevolume-mp6q container test-container-subpath-inlinevolume-mp6q: <nil>
STEP: delete the pod
Jun 22 22:15:36.108: INFO: Waiting for pod pod-subpath-test-inlinevolume-mp6q to disappear
Jun 22 22:15:36.141: INFO: Pod pod-subpath-test-inlinevolume-mp6q no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mp6q
Jun 22 22:15:36.141: INFO: Deleting pod "pod-subpath-test-inlinevolume-mp6q" in namespace "provisioning-390"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":24,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 58 lines ...
Jun 22 22:15:10.777: INFO: Pod "pvc-volume-tester-hqqbq": Phase="Running", Reason="", readiness=true. Elapsed: 12.089641013s
Jun 22 22:15:10.777: INFO: Pod "pvc-volume-tester-hqqbq" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 22 22:15:10.777: INFO: Deleting pod "pvc-volume-tester-hqqbq" in namespace "csi-mock-volumes-7122"
Jun 22 22:15:10.813: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hqqbq" to be fully deleted
STEP: Checking CSI driver logs
Jun 22 22:15:16.925: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"bf86d5c3-f278-11ec-8d06-4298abc3e82f","target_path":"/var/lib/kubelet/pods/b7f06d24-a127-49db-a636-e6fd8867758f/volumes/kubernetes.io~csi/pvc-bbc151c3-919e-4634-8a6d-8cbfb1192bd0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-hqqbq
Jun 22 22:15:16.926: INFO: Deleting pod "pvc-volume-tester-hqqbq" in namespace "csi-mock-volumes-7122"
STEP: Deleting claim pvc-7m9xh
Jun 22 22:15:17.028: INFO: Waiting up to 2m0s for PersistentVolume pvc-bbc151c3-919e-4634-8a6d-8cbfb1192bd0 to get deleted
Jun 22 22:15:17.064: INFO: PersistentVolume pvc-bbc151c3-919e-4634-8a6d-8cbfb1192bd0 found and phase=Released (35.64435ms)
Jun 22 22:15:19.099: INFO: PersistentVolume pvc-bbc151c3-919e-4634-8a6d-8cbfb1192bd0 was removed
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
test/e2e/storage/csi_mock_volume.go:467
should not be passed when podInfoOnMount=nil
test/e2e/storage/csi_mock_volume.go:517
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:14:28.733: INFO: >>> kubeConfig: /root/.kube/config
... skipping 38 lines ...
Jun 22 22:15:09.689: INFO: PersistentVolumeClaim pvc-7nhcp found but phase is Pending instead of Bound.
Jun 22 22:15:11.725: INFO: PersistentVolumeClaim pvc-7nhcp found and phase=Bound (6.145253862s)
Jun 22 22:15:11.726: INFO: Waiting up to 3m0s for PersistentVolume local-6jd7b to have phase Bound
Jun 22 22:15:11.759: INFO: PersistentVolume local-6jd7b found and phase=Bound (33.835397ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pw2p
STEP: Creating a pod to test subpath
Jun 22 22:15:11.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pw2p" in namespace "provisioning-2105" to be "Succeeded or Failed"
Jun 22 22:15:11.901: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 34.327293ms
Jun 22 22:15:13.935: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069209577s
Jun 22 22:15:15.948: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08147356s
Jun 22 22:15:17.936: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069444222s
Jun 22 22:15:19.939: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072312367s
Jun 22 22:15:21.940: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073881369s
... skipping 3 lines ...
Jun 22 22:15:29.939: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 18.072282532s
Jun 22 22:15:31.938: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 20.071989843s
Jun 22 22:15:33.937: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 22.070954028s
Jun 22 22:15:35.938: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Pending", Reason="", readiness=false. Elapsed: 24.071588603s
Jun 22 22:15:37.938: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.072031889s
STEP: Saw pod success
Jun 22 22:15:37.938: INFO: Pod "pod-subpath-test-preprovisionedpv-pw2p" satisfied condition "Succeeded or Failed"
Jun 22 22:15:37.974: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-pw2p container test-container-subpath-preprovisionedpv-pw2p: <nil>
STEP: delete the pod
Jun 22 22:15:38.061: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pw2p to disappear
Jun 22 22:15:38.099: INFO: Pod pod-subpath-test-preprovisionedpv-pw2p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pw2p
Jun 22 22:15:38.099: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pw2p" in namespace "provisioning-2105"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:38.658: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 175 lines ...
[90mtest/e2e/node/framework.go:23[0m
Clean up pods on node
[90mtest/e2e/node/kubelet.go:281[0m
kubelet should be able to delete 10 pods per node in 1m0s.
[90mtest/e2e/node/kubelet.go:343[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":2,"skipped":14,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] Deployment
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 115 lines ...
[It] should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
Jun 22 22:15:07.784: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 22:15:07.833: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-m79c
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:15:07.872: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-m79c" in namespace "provisioning-1367" to be "Succeeded or Failed"
Jun 22 22:15:07.907: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.686488ms
Jun 22 22:15:09.953: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081060605s
Jun 22 22:15:11.949: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077131416s
Jun 22 22:15:13.943: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071128499s
Jun 22 22:15:15.945: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072967165s
Jun 22 22:15:17.946: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073755999s
... skipping 7 lines ...
Jun 22 22:15:33.950: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Running", Reason="", readiness=true. Elapsed: 26.077963388s
Jun 22 22:15:35.945: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Running", Reason="", readiness=true. Elapsed: 28.072902093s
Jun 22 22:15:37.943: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Running", Reason="", readiness=true. Elapsed: 30.070633373s
Jun 22 22:15:39.945: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Running", Reason="", readiness=true. Elapsed: 32.072658368s
Jun 22 22:15:41.943: INFO: Pod "pod-subpath-test-inlinevolume-m79c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.070534751s
STEP: Saw pod success
Jun 22 22:15:41.943: INFO: Pod "pod-subpath-test-inlinevolume-m79c" satisfied condition "Succeeded or Failed"
Jun 22 22:15:41.977: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-inlinevolume-m79c container test-container-subpath-inlinevolume-m79c: <nil>
STEP: delete the pod
Jun 22 22:15:42.067: INFO: Waiting for pod pod-subpath-test-inlinevolume-m79c to disappear
Jun 22 22:15:42.103: INFO: Pod pod-subpath-test-inlinevolume-m79c no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-m79c
Jun 22 22:15:42.103: INFO: Deleting pod "pod-subpath-test-inlinevolume-m79c" in namespace "provisioning-1367"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":15,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Events
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 63 lines ...
• [SLOW TEST:16.866 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:46.387: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 87 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:46.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap that has name configmap-test-emptyKey-7865a5c5-6669-44df-b943-c6efa848280e
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
Jun 22 22:15:46.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-24" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:46.839: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:15:48.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-9339" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 41 lines ...
• [SLOW TEST:22.471 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:48.751: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 9 lines ...
test/e2e/storage/testsuites/subpath.go:207
Driver csi-hostpath doesn't support InlineVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running ","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:44.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:5.344 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
should invoke init containers on a RestartAlways pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:50.311: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
Jun 22 22:15:36.110: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00" in namespace "security-context-test-4909" to be "Succeeded or Failed"
Jun 22 22:15:36.144: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 34.032787ms
Jun 22 22:15:38.180: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070081028s
Jun 22 22:15:40.180: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069328943s
Jun 22 22:15:42.179: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068906768s
Jun 22 22:15:44.184: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073391967s
Jun 22 22:15:46.179: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068235621s
Jun 22 22:15:48.180: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069594979s
Jun 22 22:15:50.181: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.07040045s
Jun 22 22:15:50.181: INFO: Pod "alpine-nnp-false-511fcffa-c1db-4338-bd7f-5f2b859e9b00" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 22:15:50.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4909" for this suite.
... skipping 4 lines ...
test/e2e/common/node/security_context.go:298
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:50.338: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/framework/framework.go:187
... skipping 32 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:39.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 22 22:15:40.347: INFO: Waiting up to 5m0s for pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792" in namespace "emptydir-114" to be "Succeeded or Failed"
Jun 22 22:15:40.381: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792": Phase="Pending", Reason="", readiness=false. Elapsed: 33.982973ms
Jun 22 22:15:42.416: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06903592s
Jun 22 22:15:44.421: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073796712s
Jun 22 22:15:46.417: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069659298s
Jun 22 22:15:48.433: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085936214s
Jun 22 22:15:50.420: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073097359s
STEP: Saw pod success
Jun 22 22:15:50.420: INFO: Pod "pod-f1218331-8230-4dcc-bbb4-177af23fb792" satisfied condition "Succeeded or Failed"
Jun 22 22:15:50.454: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-f1218331-8230-4dcc-bbb4-177af23fb792 container test-container: <nil>
STEP: delete the pod
Jun 22 22:15:50.540: INFO: Waiting for pod pod-f1218331-8230-4dcc-bbb4-177af23fb792 to disappear
Jun 22 22:15:50.574: INFO: Pod pod-f1218331-8230-4dcc-bbb4-177af23fb792 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.682 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:35.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 22 22:15:36.269: INFO: Waiting up to 5m0s for pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757" in namespace "emptydir-4987" to be "Succeeded or Failed"
Jun 22 22:15:36.304: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 34.742732ms
Jun 22 22:15:38.340: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070892615s
Jun 22 22:15:40.344: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075237063s
Jun 22 22:15:42.340: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071547444s
Jun 22 22:15:44.341: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071910105s
Jun 22 22:15:46.341: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072298081s
Jun 22 22:15:48.338: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069617438s
Jun 22 22:15:50.339: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 14.070208586s
Jun 22 22:15:52.340: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Pending", Reason="", readiness=false. Elapsed: 16.071389396s
Jun 22 22:15:54.341: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.07195302s
STEP: Saw pod success
Jun 22 22:15:54.341: INFO: Pod "pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757" satisfied condition "Succeeded or Failed"
Jun 22 22:15:54.376: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757 container test-container: <nil>
STEP: delete the pod
Jun 22 22:15:54.462: INFO: Waiting for pod pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757 to disappear
Jun 22 22:15:54.497: INFO: Pod pod-40ba37d4-71b7-45ec-aba7-1bacf6aff757 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:18.594 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":28,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:54.606: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 149 lines ...
test/e2e/common/node/framework.go:23
when scheduling a busybox Pod with hostAliases
test/e2e/common/node/kubelet.go:139
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:55.226: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 192 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:50
Verify if offline PVC expansion works
test/e2e/storage/testsuites/volume_expand.go:176
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] HostPort
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 66 lines ...
• [SLOW TEST:24.697 seconds]
[sig-network] HostPort
test/e2e/network/common/framework.go:23
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:57.595: INFO: Only supported for providers [aws] (not gce)
... skipping 156 lines ...
• [SLOW TEST:91.852 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
test/e2e/network/service.go:1803
------------------------------
{"msg":"PASSED [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:59.594: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/framework.go:187
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-disk]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:1577
------------------------------
... skipping 186 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test env composition
Jun 22 22:15:55.536: INFO: Waiting up to 5m0s for pod "var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222" in namespace "var-expansion-1391" to be "Succeeded or Failed"
Jun 22 22:15:55.577: INFO: Pod "var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222": Phase="Pending", Reason="", readiness=false. Elapsed: 41.057723ms
Jun 22 22:15:57.612: INFO: Pod "var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076630665s
Jun 22 22:15:59.620: INFO: Pod "var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084677389s
STEP: Saw pod success
Jun 22 22:15:59.620: INFO: Pod "var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222" satisfied condition "Succeeded or Failed"
Jun 22 22:15:59.655: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222 container dapi-container: <nil>
STEP: delete the pod
Jun 22 22:15:59.733: INFO: Waiting for pod var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222 to disappear
Jun 22 22:15:59.768: INFO: Pod var-expansion-a6af19a5-8779-4fe8-a61f-44141afea222 no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:187
Jun 22 22:15:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1391" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:15:59.873: INFO: Only supported for providers [vsphere] (not gce)
... skipping 62 lines ...
• [SLOW TEST:58.492 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should be able to schedule after more than 100 missed schedule
test/e2e/apps/cronjob.go:191
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":4,"skipped":54,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:50.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 22 22:15:50.630: INFO: Waiting up to 5m0s for pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18" in namespace "emptydir-1426" to be "Succeeded or Failed"
Jun 22 22:15:50.664: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18": Phase="Pending", Reason="", readiness=false. Elapsed: 34.027345ms
Jun 22 22:15:52.705: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074493055s
Jun 22 22:15:54.698: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06798981s
Jun 22 22:15:56.698: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068219274s
Jun 22 22:15:58.712: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08186293s
Jun 22 22:16:00.716: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085469172s
STEP: Saw pod success
Jun 22 22:16:00.716: INFO: Pod "pod-c9803e82-63da-4dae-a4fa-eca09a19be18" satisfied condition "Succeeded or Failed"
Jun 22 22:16:00.752: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-c9803e82-63da-4dae-a4fa-eca09a19be18 container test-container: <nil>
STEP: delete the pod
Jun 22 22:16:00.842: INFO: Waiting for pod pod-c9803e82-63da-4dae-a4fa-eca09a19be18 to disappear
Jun 22 22:16:00.876: INFO: Pod pod-c9803e82-63da-4dae-a4fa-eca09a19be18 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.602 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:00.971: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 34 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:16:00.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-3521" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":2,"skipped":39,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:01.101: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 56 lines ...
Jun 22 22:15:49.931: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Running", Reason="", readiness=true. Elapsed: 44.075729894s
Jun 22 22:15:51.939: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Running", Reason="", readiness=true. Elapsed: 46.084164392s
Jun 22 22:15:53.930: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Running", Reason="", readiness=true. Elapsed: 48.07449445s
Jun 22 22:15:55.931: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Running", Reason="", readiness=true. Elapsed: 50.076341739s
Jun 22 22:15:57.930: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Running", Reason="", readiness=true. Elapsed: 52.074442189s
Jun 22 22:15:59.929: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Running", Reason="", readiness=true. Elapsed: 54.074032325s
Jun 22 22:16:01.930: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9": Phase="Failed", Reason="Evicted", readiness=false. Elapsed: 56.074434674s
Jun 22 22:16:01.930: INFO: Pod "pod-should-be-evicted2d38bc31-3674-4521-8c3d-ee78269a33e9" satisfied condition "terminated with reason Evicted"
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
test/e2e/framework/framework.go:187
Jun 22 22:16:01.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8950" for this suite.
... skipping 4 lines ...
test/e2e/node/framework.go:23
Pod Container lifecycle
test/e2e/node/pods.go:226
evicted pods should be terminal
test/e2e/node/pods.go:302
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal","total":-1,"completed":3,"skipped":21,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 53 lines ...
test/e2e/kubectl/framework.go:23
Kubectl expose
test/e2e/kubectl/kubectl.go:1398
should create services for rc [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 102 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:16:02.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6912" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:02.685: INFO: Only supported for providers [vsphere] (not gce)
... skipping 182 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":10,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:06.788: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 22 22:16:01.075: INFO: Waiting up to 5m0s for pod "pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3" in namespace "emptydir-8532" to be "Succeeded or Failed"
Jun 22 22:16:01.111: INFO: Pod "pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 35.986123ms
Jun 22 22:16:03.151: INFO: Pod "pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075329551s
Jun 22 22:16:05.154: INFO: Pod "pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078753142s
Jun 22 22:16:07.151: INFO: Pod "pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075742452s
STEP: Saw pod success
Jun 22 22:16:07.151: INFO: Pod "pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3" satisfied condition "Succeeded or Failed"
Jun 22 22:16:07.192: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3 container test-container: <nil>
STEP: delete the pod
Jun 22 22:16:07.283: INFO: Waiting for pod pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3 to disappear
Jun 22 22:16:07.323: INFO: Pod pod-d39083d7-1ee2-4f65-aeec-22d7d9ce8ea3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 73 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":34,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:08.574: INFO: Only supported for providers [openstack] (not gce)
... skipping 347 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":3,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:16:00.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-28407271-8b97-4eba-bb32-1d8ba82a392e
STEP: Creating a pod to test consume secrets
Jun 22 22:16:00.934: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2" in namespace "projected-807" to be "Succeeded or Failed"
Jun 22 22:16:00.969: INFO: Pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.552147ms
Jun 22 22:16:03.005: INFO: Pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070880708s
Jun 22 22:16:05.012: INFO: Pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077512961s
Jun 22 22:16:07.004: INFO: Pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069958065s
Jun 22 22:16:09.005: INFO: Pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07047753s
STEP: Saw pod success
Jun 22 22:16:09.005: INFO: Pod "pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2" satisfied condition "Succeeded or Failed"
Jun 22 22:16:09.040: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 22 22:16:09.122: INFO: Waiting for pod pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2 to disappear
Jun 22 22:16:09.160: INFO: Pod pod-projected-secrets-441188a3-fd02-45df-8bec-c6bdf8bc68c2 no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.652 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:09.270: INFO: Only supported for providers [aws] (not gce)
... skipping 125 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:16:09.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7830" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:09.821: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 141 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:16:10.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-6920" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":8,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:10.457: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 66 lines ...
Jun 22 22:16:10.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-os-rejection
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should reject pod when the node OS doesn't match pod's OS
test/e2e/common/node/pod_admission.go:38
Jun 22 22:16:10.865: INFO: Waiting up to 2m0s for pod "wrong-pod-os" in namespace "pod-os-rejection-3735" to be "failed with reason PodOSNotSupported"
Jun 22 22:16:10.903: INFO: Pod "wrong-pod-os": Phase="Failed", Reason="PodOSNotSupported", readiness=false. Elapsed: 38.634057ms
Jun 22 22:16:10.903: INFO: Pod "wrong-pod-os" satisfied condition "failed with reason PodOSNotSupported"
[AfterEach] [sig-node] PodOSRejection [NodeConformance]
test/e2e/framework/framework.go:187
Jun 22 22:16:10.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-os-rejection-3735" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS","total":-1,"completed":9,"skipped":95,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:16:08.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 22 22:16:09.029: INFO: Waiting up to 5m0s for pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42" in namespace "emptydir-6545" to be "Succeeded or Failed"
Jun 22 22:16:09.064: INFO: Pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42": Phase="Pending", Reason="", readiness=false. Elapsed: 34.734378ms
Jun 22 22:16:11.100: INFO: Pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070307459s
Jun 22 22:16:13.100: INFO: Pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07028799s
Jun 22 22:16:15.100: INFO: Pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070158192s
Jun 22 22:16:17.101: INFO: Pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071025226s
STEP: Saw pod success
Jun 22 22:16:17.101: INFO: Pod "pod-47c878d7-ec2e-4f83-b833-33298f7dfa42" satisfied condition "Succeeded or Failed"
Jun 22 22:16:17.137: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-47c878d7-ec2e-4f83-b833-33298f7dfa42 container test-container: <nil>
STEP: delete the pod
Jun 22 22:16:17.215: INFO: Waiting for pod pod-47c878d7-ec2e-4f83-b833-33298f7dfa42 to disappear
Jun 22 22:16:17.254: INFO: Pod pod-47c878d7-ec2e-4f83-b833-33298f7dfa42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.587 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 15 lines ...
STEP: Destroying namespace "services-1137" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":8,"skipped":69,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:17.953: INFO: Only supported for providers [vsphere] (not gce)
... skipping 281 lines ...
Jun 22 22:16:07.704: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:07.704: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/hostexec-nodes-us-east1-b-t83b-8knlc/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-7154%22%2Fhost%3B+mount+-t+tmpfs+e2e-mount-propagation-host+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-7154%22%2Fhost%3B+echo+host+%3E+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-7154%22%2Fhost%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 22:16:08.126: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7154 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:08.127: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:08.127: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:08.127: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:08.422: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
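Each `ExecWithOptions` call above turns into a POST to the pod's `exec` subresource, with every argv element repeated as its own URL-encoded `command=` query parameter, as the `execute(POST …)` lines show. A small Python sketch of that encoding (`build_exec_url` is an illustrative name, not the client library's API):

```python
from urllib.parse import urlencode

def build_exec_url(api_server, namespace, pod, container, command):
    """Encode an exec request the way the URLs in the log are encoded:
    one repeated `command=` parameter per argv element, plus the target
    container and which output streams to capture."""
    params = [("command", c) for c in command]
    params += [("container", container), ("stderr", "true"), ("stdout", "true")]
    return (f"{api_server}/api/v1/namespaces/{namespace}"
            f"/pods/{pod}/exec?{urlencode(params)}")

# Matches the first "pod master mount master" request above.
url = build_exec_url(
    "https://34.138.125.141", "mount-propagation-7154", "master", "cntr",
    ["/bin/sh", "-c", "cat /mnt/test/master/file"],
)
```

`urlencode` uses `quote_plus` by default, which is why `/` appears as `%2F` and spaces as `+` in the logged URLs.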
Jun 22 22:16:08.457: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7154 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:08.457: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:08.458: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:08.458: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:08.743: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:08.778: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7154 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:08.778: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:08.779: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:08.779: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:09.053: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:09.087: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7154 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:09.087: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:09.088: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:09.088: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:09.360: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:09.394: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7154 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:09.394: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:09.395: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:09.395: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:09.656: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jun 22 22:16:09.690: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7154 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:09.690: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:09.691: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:09.691: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:09.970: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jun 22 22:16:10.005: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7154 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:10.005: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:10.006: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:10.006: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:10.280: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jun 22 22:16:10.317: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7154 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:10.317: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:10.318: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:10.318: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:10.621: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:10.659: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7154 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:10.659: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:10.660: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:10.660: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:10.937: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:10.971: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7154 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:10.972: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:10.972: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:10.972: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:11.249: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jun 22 22:16:11.285: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7154 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:11.285: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:11.286: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:11.286: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:11.552: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:11.587: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7154 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:11.587: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:11.588: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:11.588: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:11.851: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:11.887: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7154 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:11.887: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:11.888: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:11.888: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:12.172: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jun 22 22:16:12.206: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7154 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:12.206: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:12.207: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:12.207: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:12.503: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:12.539: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7154 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:12.539: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:12.540: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:12.540: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:12.831: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:12.867: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7154 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:12.867: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:12.868: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:12.868: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:13.133: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:13.168: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7154 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:13.168: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:13.169: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:13.169: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:13.445: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:13.480: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7154 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:13.481: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:13.481: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:13.481: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:13.781: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 22 22:16:13.815: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7154 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:13.815: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:13.816: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:13.816: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:14.087: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jun 22 22:16:14.122: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7154 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 22:16:14.122: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:14.122: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:14.122: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 22:16:14.416: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
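The grid of `cat` results above is the substance of the test: four pods mount the same host directory with different `mountPropagation` modes (master = Bidirectional, slave = HostToContainer, private and default = None), and which tmpfs mounts each pod can see follows Linux shared-subtree rules. A sketch encoding those rules and reproducing the pass/fail pattern in the log (`visible` and `MODE` are illustrative, not the test's Go code):

```python
# mountPropagation mode per pod in the test; "host" is the host mount itself.
MODE = {"master": "Bidirectional", "slave": "HostToContainer",
        "private": "None", "default": "None"}

def visible(observer, origin):
    """Predict whether `cat /mnt/test/<origin>/file` succeeds in `observer`.

    A pod always sees its own mount. Mounts made on the host, or in a
    Bidirectional pod (they propagate back to the host), become visible to
    every Bidirectional or HostToContainer pod. Nothing else propagates.
    """
    if observer == origin:
        return True
    reaches_host = origin == "host" or MODE.get(origin) == "Bidirectional"
    return reaches_host and MODE[observer] in ("Bidirectional", "HostToContainer")

# Visibility matrix, matching the exec results logged above.
matrix = {obs: [m for m in ("master", "slave", "private", "default", "host")
                if visible(obs, m)]
          for obs in MODE}
```

For example, the slave pod sees the master's and the host's mounts but not the private or default ones, exactly as the `pod slave mount …` lines report.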
Jun 22 22:16:14.416: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-7154 PodName:hostexec-nodes-us-east1-b-t83b-8knlc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 22 22:16:14.416: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:14.417: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:14.417: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/mount-propagation-7154/pods/hostexec-nodes-us-east1-b-t83b-8knlc/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=pidof+kubelet&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 22:16:14.716: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4823 -m cat "/var/lib/kubelet/mount-propagation-7154/host/file"] Namespace:mount-propagation-7154 PodName:hostexec-nodes-us-east1-b-t83b-8knlc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 22 22:16:14.716: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
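The exec URLs logged above show how the client serializes a command array: each argv element becomes its own repeated `command=` query parameter, with `/` and spaces percent/plus-encoded. A minimal standard-library sketch of that encoding (the parameter list below is copied from the request in the log, not from the framework's code):

```python
from urllib.parse import urlencode

# Each element of ["/bin/sh", "-c", "cat /mnt/test/host/file"] becomes its own
# "command=" parameter, exactly as it appears in the exec URL above.
params = [
    ("command", "/bin/sh"),
    ("command", "-c"),
    ("command", "cat /mnt/test/host/file"),
    ("container", "cntr"),
    ("stderr", "true"),
    ("stdout", "true"),
]
# urlencode's default quote_plus encoding turns "/" into %2F and " " into "+".
query = urlencode(params)
print(query)
# → command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&stderr=true&stdout=true
```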
• [SLOW TEST:38.503 seconds]
[sig-node] Mount propagation
test/e2e/node/framework.go:23
should propagate mounts within defined scopes
test/e2e/node/mount_propagation.go:85
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":3,"skipped":20,"failed":0}
SS
------------------------------
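Each `{"msg":"PASSED …"}` line interleaved in this log is a standalone JSON object emitted per completed spec, so an overall tally can be extracted with a short script. A minimal sketch using two summary lines taken verbatim from this log (the field names `msg`/`completed`/`skipped`/`failed` are as shown above):

```python
import json

# Two per-spec summary lines copied from this log; in practice you would
# filter a full log file for lines that parse as JSON objects with a "msg" key.
lines = [
    '{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":3,"skipped":20,"failed":0}',
    '{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0}',
]
records = [json.loads(l) for l in lines]
passed = sum(1 for r in records if r["msg"].startswith("PASSED"))
failed = sum(r["failed"] for r in records)
print(f"passed={passed} failed={failed}")
# → passed=2 failed=0
```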
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:18.535: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: tmpfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
Driver hostPathSymlink doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:38.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 99 lines ...
test/e2e/common/network/framework.go:23
Granular Checks: Pods
test/e2e/common/network/networking.go:32
should function for intra-pod communication: udp [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:21.087: INFO: Driver local doesn't support ext3 -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:79
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 22 22:16:11.288: INFO: Waiting up to 5m0s for pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054" in namespace "security-context-6541" to be "Succeeded or Failed"
Jun 22 22:16:11.322: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054": Phase="Pending", Reason="", readiness=false. Elapsed: 33.690053ms
Jun 22 22:16:13.356: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06849216s
Jun 22 22:16:15.356: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068518538s
Jun 22 22:16:17.357: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069494645s
Jun 22 22:16:19.358: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070191737s
Jun 22 22:16:21.358: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069726861s
STEP: Saw pod success
Jun 22 22:16:21.358: INFO: Pod "security-context-b00706aa-abb6-4703-89b8-2f66012c7054" satisfied condition "Succeeded or Failed"
Jun 22 22:16:21.392: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod security-context-b00706aa-abb6-4703-89b8-2f66012c7054 container test-container: <nil>
STEP: delete the pod
Jun 22 22:16:21.472: INFO: Waiting for pod security-context-b00706aa-abb6-4703-89b8-2f66012c7054 to disappear
Jun 22 22:16:21.506: INFO: Pod security-context-b00706aa-abb6-4703-89b8-2f66012c7054 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.587 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:79
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":10,"skipped":96,"failed":0}
SS
------------------------------
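The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Phase="Pending" … Elapsed: …` runs throughout this log come from a poll-until-phase loop. The sketch below is only an illustrative re-implementation of that loop shape, not the framework's actual helper; `wait_for_phase` and its parameters are hypothetical names:

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"), timeout=300.0, interval=2.0):
    """Poll get_phase() every `interval` seconds until it returns a phase in
    `want`, or raise once `timeout` seconds elapse (hypothetical helper)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"phase not in {want} after {timeout}s")

# Simulate a pod that reports Pending twice before reaching Succeeded,
# mirroring the Pending/Pending/.../Succeeded sequences in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0.01))
# → Succeeded
```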
[BeforeEach] [sig-architecture] Conformance Tests
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:16:21.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conformance-tests-6693" for this suite.
•
------------------------------
{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":-1,"completed":11,"skipped":98,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:22.046: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 28 lines ...
STEP: Destroying namespace "node-problem-detector-8646" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.320 seconds]
[sig-node] NodeProblemDetector
test/e2e/node/framework.go:23
should run without error [BeforeEach]
test/e2e/node/node_problem_detector.go:62
Only supported for node OS distro [gci ubuntu] (not debian)
test/e2e/node/node_problem_detector.go:58
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:16:07.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 95 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":56,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:23.101: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 51 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-p5p7
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:15:55.068: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p5p7" in namespace "subpath-8696" to be "Succeeded or Failed"
Jun 22 22:15:55.104: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.568234ms
Jun 22 22:15:57.139: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070968024s
Jun 22 22:15:59.141: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072927529s
Jun 22 22:16:01.140: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 6.071593461s
Jun 22 22:16:03.140: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 8.071638271s
Jun 22 22:16:05.140: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 10.07184246s
... skipping 4 lines ...
Jun 22 22:16:15.141: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 20.072291421s
Jun 22 22:16:17.139: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 22.07056495s
Jun 22 22:16:19.145: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 24.07622278s
Jun 22 22:16:21.140: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Running", Reason="", readiness=true. Elapsed: 26.071593915s
Jun 22 22:16:23.140: INFO: Pod "pod-subpath-test-configmap-p5p7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.071984378s
STEP: Saw pod success
Jun 22 22:16:23.140: INFO: Pod "pod-subpath-test-configmap-p5p7" satisfied condition "Succeeded or Failed"
Jun 22 22:16:23.181: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-configmap-p5p7 container test-container-subpath-configmap-p5p7: <nil>
STEP: delete the pod
Jun 22 22:16:23.285: INFO: Waiting for pod pod-subpath-test-configmap-p5p7 to disappear
Jun 22 22:16:23.324: INFO: Pod pod-subpath-test-configmap-p5p7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-p5p7
Jun 22 22:16:23.324: INFO: Deleting pod "pod-subpath-test-configmap-p5p7" in namespace "subpath-8696"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with configmap pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:23.465: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 140 lines ...
Jun 22 22:16:08.958: INFO: PersistentVolumeClaim pvc-l5szc found but phase is Pending instead of Bound.
Jun 22 22:16:10.998: INFO: PersistentVolumeClaim pvc-l5szc found and phase=Bound (10.22174802s)
Jun 22 22:16:10.998: INFO: Waiting up to 3m0s for PersistentVolume local-ljkgc to have phase Bound
Jun 22 22:16:11.033: INFO: PersistentVolume local-ljkgc found and phase=Bound (34.832859ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c7qq
STEP: Creating a pod to test subpath
Jun 22 22:16:11.141: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c7qq" in namespace "provisioning-4655" to be "Succeeded or Failed"
Jun 22 22:16:11.175: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.090128ms
Jun 22 22:16:13.212: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070693197s
Jun 22 22:16:15.213: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072051655s
Jun 22 22:16:17.210: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069299877s
Jun 22 22:16:19.212: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071317588s
Jun 22 22:16:21.213: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07240074s
Jun 22 22:16:23.220: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.078662442s
STEP: Saw pod success
Jun 22 22:16:23.220: INFO: Pod "pod-subpath-test-preprovisionedpv-c7qq" satisfied condition "Succeeded or Failed"
Jun 22 22:16:23.256: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-c7qq container test-container-subpath-preprovisionedpv-c7qq: <nil>
STEP: delete the pod
Jun 22 22:16:23.349: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c7qq to disappear
Jun 22 22:16:23.384: INFO: Pod pod-subpath-test-preprovisionedpv-c7qq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c7qq
Jun 22 22:16:23.384: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c7qq" in namespace "provisioning-4655"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":15,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-78/configmap-test-274e0f1f-5abc-4df4-8ae3-83fc01a0d53d
STEP: Creating a pod to test consume configMaps
Jun 22 22:16:18.879: INFO: Waiting up to 5m0s for pod "pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001" in namespace "configmap-78" to be "Succeeded or Failed"
Jun 22 22:16:18.913: INFO: Pod "pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001": Phase="Pending", Reason="", readiness=false. Elapsed: 34.174077ms
Jun 22 22:16:20.948: INFO: Pod "pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069019831s
Jun 22 22:16:22.950: INFO: Pod "pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070762875s
Jun 22 22:16:24.948: INFO: Pod "pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069015159s
STEP: Saw pod success
Jun 22 22:16:24.948: INFO: Pod "pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001" satisfied condition "Succeeded or Failed"
Jun 22 22:16:24.982: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001 container env-test: <nil>
STEP: delete the pod
Jun 22 22:16:25.065: INFO: Waiting for pod pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001 to disappear
Jun 22 22:16:25.100: INFO: Pod pod-configmaps-05c0ff5c-a1c2-4499-84c8-278e1b536001 no longer exists
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.611 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:25.186: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 128 lines ...
Jun 22 22:15:56.148: INFO: Pod "pvc-volume-tester-gn9n9": Phase="Running", Reason="", readiness=true. Elapsed: 16.073345986s
Jun 22 22:15:56.148: INFO: Pod "pvc-volume-tester-gn9n9" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 22 22:15:56.148: INFO: Deleting pod "pvc-volume-tester-gn9n9" in namespace "csi-mock-volumes-581"
Jun 22 22:15:56.186: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gn9n9" to be fully deleted
STEP: Checking CSI driver logs
Jun 22 22:16:04.312: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"d8345220-f278-11ec-8b92-72945cc6a6a3","target_path":"/var/lib/kubelet/pods/e3f89e81-fc10-40a5-89c4-eab1a3d26fce/volumes/kubernetes.io~csi/pvc-541df46b-97ff-4c66-b5f4-7bc6dc9f948b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-gn9n9
Jun 22 22:16:04.312: INFO: Deleting pod "pvc-volume-tester-gn9n9" in namespace "csi-mock-volumes-581"
STEP: Deleting claim pvc-tpccj
Jun 22 22:16:04.419: INFO: Waiting up to 2m0s for PersistentVolume pvc-541df46b-97ff-4c66-b5f4-7bc6dc9f948b to get deleted
Jun 22 22:16:04.458: INFO: PersistentVolume pvc-541df46b-97ff-4c66-b5f4-7bc6dc9f948b found and phase=Released (39.004773ms)
Jun 22 22:16:06.494: INFO: PersistentVolume pvc-541df46b-97ff-4c66-b5f4-7bc6dc9f948b was removed
... skipping 44 lines ...
test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
test/e2e/storage/csi_mock_volume.go:1574
token should not be plumbed down when CSIDriver is not deployed
test/e2e/storage/csi_mock_volume.go:1602
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:25.899: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 44 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 22 22:16:18.345: INFO: Waiting up to 5m0s for pod "pod-151dca4e-6251-4b41-996d-b05b05984807" in namespace "emptydir-5909" to be "Succeeded or Failed"
Jun 22 22:16:18.379: INFO: Pod "pod-151dca4e-6251-4b41-996d-b05b05984807": Phase="Pending", Reason="", readiness=false. Elapsed: 33.860827ms
Jun 22 22:16:20.413: INFO: Pod "pod-151dca4e-6251-4b41-996d-b05b05984807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068180531s
Jun 22 22:16:22.416: INFO: Pod "pod-151dca4e-6251-4b41-996d-b05b05984807": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071108831s
Jun 22 22:16:24.415: INFO: Pod "pod-151dca4e-6251-4b41-996d-b05b05984807": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069725626s
Jun 22 22:16:26.414: INFO: Pod "pod-151dca4e-6251-4b41-996d-b05b05984807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068939167s
STEP: Saw pod success
Jun 22 22:16:26.414: INFO: Pod "pod-151dca4e-6251-4b41-996d-b05b05984807" satisfied condition "Succeeded or Failed"
Jun 22 22:16:26.449: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-151dca4e-6251-4b41-996d-b05b05984807 container test-container: <nil>
STEP: delete the pod
Jun 22 22:16:26.528: INFO: Waiting for pod pod-151dca4e-6251-4b41-996d-b05b05984807 to disappear
Jun 22 22:16:26.563: INFO: Pod pod-151dca4e-6251-4b41-996d-b05b05984807 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
volume on tmpfs should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:26.657: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:187
... skipping 23 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 22 22:16:21.440: INFO: Waiting up to 5m0s for pod "metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5" in namespace "downward-api-4634" to be "Succeeded or Failed"
Jun 22 22:16:21.474: INFO: Pod "metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.742523ms
Jun 22 22:16:23.509: INFO: Pod "metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068541283s
Jun 22 22:16:25.510: INFO: Pod "metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069290264s
Jun 22 22:16:27.510: INFO: Pod "metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069403405s
STEP: Saw pod success
Jun 22 22:16:27.510: INFO: Pod "metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5" satisfied condition "Succeeded or Failed"
Jun 22 22:16:27.547: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5 container client-container: <nil>
STEP: delete the pod
Jun 22 22:16:27.652: INFO: Waiting for pod metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5 to disappear
Jun 22 22:16:27.686: INFO: Pod metadata-volume-33fd3b12-220f-4a00-af98-4ff3dd791ca5 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.612 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:93
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:27.780: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 9 lines ...
test/e2e/storage/testsuites/ephemeral.go:277
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:15:27.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 112 lines ...
test/e2e/storage/utils/framework.go:23
CSI online volume expansion
test/e2e/storage/csi_mock_volume.go:750
should expand volume without restarting pod if attach=on, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:765
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":5,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:27.826: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 70 lines ...
Jun 22 22:16:09.621: INFO: PersistentVolumeClaim pvc-64b2z found but phase is Pending instead of Bound.
Jun 22 22:16:11.657: INFO: PersistentVolumeClaim pvc-64b2z found and phase=Bound (12.255276803s)
Jun 22 22:16:11.658: INFO: Waiting up to 3m0s for PersistentVolume local-r52pn to have phase Bound
Jun 22 22:16:11.694: INFO: PersistentVolume local-r52pn found and phase=Bound (36.022258ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nxzj
STEP: Creating a pod to test subpath
Jun 22 22:16:11.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nxzj" in namespace "provisioning-5375" to be "Succeeded or Failed"
Jun 22 22:16:11.832: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.88379ms
Jun 22 22:16:13.869: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07014462s
Jun 22 22:16:15.868: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069319112s
Jun 22 22:16:17.869: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07085408s
Jun 22 22:16:19.868: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069110104s
Jun 22 22:16:21.869: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.070949316s
Jun 22 22:16:23.876: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077242795s
Jun 22 22:16:25.870: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.07142591s
Jun 22 22:16:27.873: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.074654244s
STEP: Saw pod success
Jun 22 22:16:27.873: INFO: Pod "pod-subpath-test-preprovisionedpv-nxzj" satisfied condition "Succeeded or Failed"
Jun 22 22:16:27.911: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-nxzj container test-container-subpath-preprovisionedpv-nxzj: <nil>
STEP: delete the pod
Jun 22 22:16:28.007: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nxzj to disappear
Jun 22 22:16:28.042: INFO: Pod pod-subpath-test-preprovisionedpv-nxzj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nxzj
Jun 22 22:16:28.043: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nxzj" in namespace "provisioning-5375"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":28,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:28.669: INFO: Only supported for providers [azure] (not gce)
... skipping 49 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver hostPathSymlink doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 110 lines ...
Jun 22 22:15:45.194: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:15:47.192: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071380509s
Jun 22 22:15:47.192: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:15:49.199: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 12.078326583s
Jun 22 22:15:49.199: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 22 22:15:49.199: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 22 22:15:49.199: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=services-9672 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.198.93:80 && echo service-down-failed'
Jun 22 22:15:51.687: INFO: rc: 28
Jun 22 22:15:51.688: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.198.93:80 && echo service-down-failed" in pod services-9672/verify-service-down-host-exec-pod: error running /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=services-9672 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.198.93:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.70.198.93:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9672
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Jun 22 22:15:51.813: INFO: Creating new host exec pod
... skipping 4 lines ...
Jun 22 22:15:53.971: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:15:55.972: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100291119s
Jun 22 22:15:55.972: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:15:57.968: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.096782354s
Jun 22 22:15:57.968: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 22 22:15:57.968: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 22 22:15:57.968: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=services-9672 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.250.168:80 && echo service-down-failed'
Jun 22 22:16:00.444: INFO: rc: 28
Jun 22 22:16:00.444: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.250.168:80 && echo service-down-failed" in pod services-9672/verify-service-down-host-exec-pod: error running /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=services-9672 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.250.168:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.65.250.168:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9672
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Jun 22 22:16:00.593: INFO: Creating new host exec pod
... skipping 40 lines ...
Jun 22 22:16:22.349: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:16:24.344: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074213428s
Jun 22 22:16:24.344: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:16:26.343: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.073273176s
Jun 22 22:16:26.343: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 22 22:16:26.343: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 22 22:16:26.343: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=services-9672 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.198.93:80 && echo service-down-failed'
Jun 22 22:16:28.812: INFO: rc: 28
Jun 22 22:16:28.812: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.198.93:80 && echo service-down-failed" in pod services-9672/verify-service-down-host-exec-pod: error running /logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=services-9672 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.198.93:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.70.198.93:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9672
[AfterEach] [sig-network] Services
test/e2e/framework/framework.go:187
Jun 22 22:16:28.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:121.364 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should implement service.kubernetes.io/headless
test/e2e/network/service.go:2207
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":5,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:29.124: INFO: Only supported for providers [aws] (not gce)
... skipping 23 lines ...
Jun 22 22:16:29.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
test/e2e/storage/volume_provisioning.go:743
Jun 22 22:16:29.423: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/framework.go:187
Jun 22 22:16:29.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-7749" for this suite.
S [SKIPPING] [0.395 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
Invalid AWS KMS key
test/e2e/storage/volume_provisioning.go:742
should report an error and create no PV [It]
test/e2e/storage/volume_provisioning.go:743
Only supported for providers [aws] (not gce)
test/e2e/storage/volume_provisioning.go:744
------------------------------
... skipping 26 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:16:27.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail when exceeds active deadline
test/e2e/apps/job.go:293
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
test/e2e/framework/framework.go:187
Jun 22 22:16:30.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2259" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":4,"skipped":24,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:30.204: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 106 lines ...
Jun 22 22:14:41.615: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2501
Jun 22 22:14:41.651: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2501
Jun 22 22:14:41.690: INFO: creating *v1.StatefulSet: csi-mock-volumes-2501-7171/csi-mockplugin
Jun 22 22:14:41.729: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2501
Jun 22 22:14:41.771: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2501"
Jun 22 22:14:41.806: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2501 to register on node nodes-us-east1-b-vf6p
I0622 22:14:59.221693 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0622 22:14:59.256406 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2501","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 22:14:59.299333 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0622 22:14:59.334686 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0622 22:14:59.407641 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2501","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 22:14:59.801439 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2501"},"Error":"","FullError":null}
STEP: Creating pod
Jun 22 22:15:08.478: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 22 22:15:08.568: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-5qc2k" in namespace "csi-mock-volumes-2501" to be "running"
Jun 22 22:15:08.602: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 34.61935ms
I0622 22:15:08.603725 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0622 22:15:09.657144 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4"}}},"Error":"","FullError":null}
Jun 22 22:15:10.645: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077391357s
Jun 22 22:15:12.637: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069271793s
I0622 22:15:12.678910 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:12.713828 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:12.748440 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 22:15:12.783: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:12.784: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:12.784: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-2501-7171/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-2501%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-2501%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:15:13.063378 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-2501/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4","storage.kubernetes.io/csiProvisionerIdentity":"1655936099351-8081-csi-mock-csi-mock-volumes-2501"}},"Response":{},"Error":"","FullError":null}
I0622 22:15:13.099091 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:13.134324 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:13.169176 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 22:15:13.202: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:13.203: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:13.204: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-2501-7171/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ff0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ff0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 22:15:13.460: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:13.461: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:13.461: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-2501-7171/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ff0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ff0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 22:15:13.723: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:13.724: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:13.724: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-2501-7171/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Ff0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:15:13.985248 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-2501/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/f0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9/volumes/kubernetes.io~csi/pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4","storage.kubernetes.io/csiProvisionerIdentity":"1655936099351-8081-csi-mock-csi-mock-volumes-2501"}},"Response":{},"Error":"","FullError":null}
Jun 22 22:15:14.637: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06970403s
Jun 22 22:15:16.638: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0707588s
Jun 22 22:15:18.639: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07150319s
Jun 22 22:15:20.639: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.07099983s
Jun 22 22:15:22.638: INFO: Pod "pvc-volume-tester-5qc2k": Phase="Running", Reason="", readiness=true. Elapsed: 14.069924231s
Jun 22 22:15:22.638: INFO: Pod "pvc-volume-tester-5qc2k" satisfied condition "running"
Jun 22 22:15:22.638: INFO: Deleting pod "pvc-volume-tester-5qc2k" in namespace "csi-mock-volumes-2501"
Jun 22 22:15:22.674: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5qc2k" to be fully deleted
Jun 22 22:15:23.105: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:23.106: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:23.107: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-2501-7171/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Ff0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:15:23.389405 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f0cc7df6-2fdf-40c3-a8fd-60d9b16fc4d9/volumes/kubernetes.io~csi/pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4/mount"},"Response":{},"Error":"","FullError":null}
I0622 22:15:23.511242 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:23.547051 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-2501/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
I0622 22:15:30.803044 7091 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 22 22:15:31.781: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pptsb", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2501", SelfLink:"", UID:"44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", ResourceVersion:"3275", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001848810), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00300d770), VolumeMode:(*v1.PersistentVolumeMode)(0xc00300d780), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:15:31.781: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pptsb", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2501", SelfLink:"", UID:"44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", ResourceVersion:"3277", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"nodes-us-east1-b-vf6p"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001848a98), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001848ac8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00300d8f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00300d900), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:15:31.781: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pptsb", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2501", SelfLink:"", UID:"44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", ResourceVersion:"3278", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2501", "volume.kubernetes.io/selected-node":"nodes-us-east1-b-vf6p", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2501"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00373c768), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00373c7b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00373c7e0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00248adc0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00248ade0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:15:31.781: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pptsb", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2501", SelfLink:"", UID:"44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", ResourceVersion:"3336", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2501", "volume.kubernetes.io/selected-node":"nodes-us-east1-b-vf6p", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2501"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe030), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe060), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 9, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe090), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", StorageClassName:(*string)(0xc002c10020), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c10030), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:15:31.781: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pptsb", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2501", SelfLink:"", UID:"44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", ResourceVersion:"3338", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2501", "volume.kubernetes.io/selected-node":"nodes-us-east1-b-vf6p", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2501"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe0f0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe120), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 9, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe150), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 15, 9, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001abe180), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-44df1c9d-0bcb-4ba4-b0a5-3d3b6b2ef6e4", StorageClassName:(*string)(0xc002c10070), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c10080), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
... skipping 49 lines ...
test/e2e/storage/utils/framework.go:23
storage capacity
test/e2e/storage/csi_mock_volume.go:1100
exhausted, late binding, no topology
test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:30.553: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/framework/framework.go:187
... skipping 222 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":34,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 10 lines ...
Jun 22 22:16:33.285: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 22 22:16:33.285: INFO: stdout: "etcd-0 controller-manager etcd-1 scheduler"
STEP: getting details of componentstatuses
STEP: getting status of etcd-0
Jun 22 22:16:33.285: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-9492 get componentstatuses etcd-0'
Jun 22 22:16:33.458: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 22 22:16:33.459: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-0 Healthy {\"health\":\"true\",\"reason\":\"\"} \n"
STEP: getting status of controller-manager
Jun 22 22:16:33.459: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-9492 get componentstatuses controller-manager'
Jun 22 22:16:33.642: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 22 22:16:33.642: INFO: stdout: "NAME STATUS MESSAGE ERROR\ncontroller-manager Healthy ok \n"
STEP: getting status of etcd-1
Jun 22 22:16:33.642: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-9492 get componentstatuses etcd-1'
Jun 22 22:16:33.819: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 22 22:16:33.819: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-1 Healthy {\"health\":\"true\",\"reason\":\"\"} \n"
STEP: getting status of scheduler
Jun 22 22:16:33.819: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-9492 get componentstatuses scheduler'
Jun 22 22:16:34.000: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 22 22:16:34.001: INFO: stdout: "NAME STATUS MESSAGE ERROR\nscheduler Healthy ok \n"
[AfterEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
Jun 22 22:16:34.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9492" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":6,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:34.091: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 42 lines ...
Jun 22 22:16:09.034: INFO: PersistentVolumeClaim pvc-mzsr2 found but phase is Pending instead of Bound.
Jun 22 22:16:11.068: INFO: PersistentVolumeClaim pvc-mzsr2 found and phase=Bound (4.10529862s)
Jun 22 22:16:11.068: INFO: Waiting up to 3m0s for PersistentVolume local-c695x to have phase Bound
Jun 22 22:16:11.102: INFO: PersistentVolume local-c695x found and phase=Bound (33.601182ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9zsj
STEP: Creating a pod to test subpath
Jun 22 22:16:11.208: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9zsj" in namespace "provisioning-1248" to be "Succeeded or Failed"
Jun 22 22:16:11.242: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.723811ms
Jun 22 22:16:13.278: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069475887s
Jun 22 22:16:15.278: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069378547s
Jun 22 22:16:17.280: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071153857s
Jun 22 22:16:19.281: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072660426s
Jun 22 22:16:21.278: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069159877s
STEP: Saw pod success
Jun 22 22:16:21.278: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj" satisfied condition "Succeeded or Failed"
Jun 22 22:16:21.312: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-9zsj container test-container-subpath-preprovisionedpv-9zsj: <nil>
STEP: delete the pod
Jun 22 22:16:21.389: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9zsj to disappear
Jun 22 22:16:21.422: INFO: Pod pod-subpath-test-preprovisionedpv-9zsj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9zsj
Jun 22 22:16:21.422: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9zsj" in namespace "provisioning-1248"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9zsj
STEP: Creating a pod to test subpath
Jun 22 22:16:21.492: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9zsj" in namespace "provisioning-1248" to be "Succeeded or Failed"
Jun 22 22:16:21.526: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.974168ms
Jun 22 22:16:23.561: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068866182s
Jun 22 22:16:25.567: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075008962s
Jun 22 22:16:27.565: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072800038s
Jun 22 22:16:29.565: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072904363s
Jun 22 22:16:31.560: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067851509s
Jun 22 22:16:33.564: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.072194429s
Jun 22 22:16:35.564: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.072371215s
Jun 22 22:16:37.560: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.068499493s
STEP: Saw pod success
Jun 22 22:16:37.560: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsj" satisfied condition "Succeeded or Failed"
Jun 22 22:16:37.596: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-9zsj container test-container-subpath-preprovisionedpv-9zsj: <nil>
STEP: delete the pod
Jun 22 22:16:37.693: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9zsj to disappear
Jun 22 22:16:37.731: INFO: Pod pod-subpath-test-preprovisionedpv-9zsj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9zsj
Jun 22 22:16:37.731: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9zsj" in namespace "provisioning-1248"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":23,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 95 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":29,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:39.479: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:112
STEP: Creating configMap with name configmap-test-volume-map-9fe8fc79-cadf-4ecc-bc25-a6ef21ba93b6
STEP: Creating a pod to test consume configMaps
Jun 22 22:16:23.869: INFO: Waiting up to 5m0s for pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc" in namespace "configmap-2338" to be "Succeeded or Failed"
Jun 22 22:16:23.909: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.372122ms
Jun 22 22:16:25.947: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077710793s
Jun 22 22:16:27.952: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083382702s
Jun 22 22:16:29.947: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078089408s
Jun 22 22:16:31.945: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076337565s
Jun 22 22:16:33.945: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075841518s
Jun 22 22:16:35.946: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077627308s
Jun 22 22:16:37.949: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.079956554s
Jun 22 22:16:39.945: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.075693004s
Jun 22 22:16:41.945: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.076608576s
STEP: Saw pod success
Jun 22 22:16:41.946: INFO: Pod "pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc" satisfied condition "Succeeded or Failed"
Jun 22 22:16:41.980: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:16:42.063: INFO: Waiting for pod pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc to disappear
Jun 22 22:16:42.099: INFO: Pod pod-configmaps-3fdab616-01b7-41c9-ba94-422cc7cc1ccc no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:18.629 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:112
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":70,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:42.200: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-bd50ab04-f3e3-460f-ac62-e8e714370280
STEP: Creating a pod to test consume configMaps
Jun 22 22:16:25.920: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f" in namespace "projected-1690" to be "Succeeded or Failed"
Jun 22 22:16:25.954: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.93256ms
Jun 22 22:16:27.990: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069304547s
Jun 22 22:16:29.991: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070864529s
Jun 22 22:16:31.991: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070540303s
Jun 22 22:16:33.996: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075564436s
Jun 22 22:16:35.989: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069062903s
Jun 22 22:16:37.991: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.070971087s
Jun 22 22:16:39.994: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.073923635s
Jun 22 22:16:41.990: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.070171055s
STEP: Saw pod success
Jun 22 22:16:41.991: INFO: Pod "pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f" satisfied condition "Succeeded or Failed"
Jun 22 22:16:42.025: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:16:42.110: INFO: Waiting for pod pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f to disappear
Jun 22 22:16:42.147: INFO: Pod pod-projected-configmaps-9fee03f5-7889-4fc8-a5fd-dbf924c2b53f no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:42.254: INFO: Only supported for providers [azure] (not gce)
... skipping 12 lines ...
test/e2e/storage/testsuites/volumes.go:198
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:16:28.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 185 lines ...
• [SLOW TEST:49.046 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when initial unready endpoints get ready
test/e2e/network/conntrack.go:295
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready","total":-1,"completed":3,"skipped":63,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:50.263: INFO: Only supported for providers [azure] (not gce)
... skipping 43 lines ...
Jun 22 22:16:41.981: INFO: The phase of Pod server-envvars-9dca0bfc-1575-48fc-8002-6218b4268ade is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:16:43.983: INFO: Pod "server-envvars-9dca0bfc-1575-48fc-8002-6218b4268ade": Phase="Pending", Reason="", readiness=false. Elapsed: 14.07732069s
Jun 22 22:16:43.983: INFO: The phase of Pod server-envvars-9dca0bfc-1575-48fc-8002-6218b4268ade is Pending, waiting for it to be Running (with Ready = true)
Jun 22 22:16:45.978: INFO: Pod "server-envvars-9dca0bfc-1575-48fc-8002-6218b4268ade": Phase="Running", Reason="", readiness=true. Elapsed: 16.07255683s
Jun 22 22:16:45.978: INFO: The phase of Pod server-envvars-9dca0bfc-1575-48fc-8002-6218b4268ade is Running (Ready = true)
Jun 22 22:16:45.978: INFO: Pod "server-envvars-9dca0bfc-1575-48fc-8002-6218b4268ade" satisfied condition "running and ready"
Jun 22 22:16:46.105: INFO: Waiting up to 5m0s for pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117" in namespace "pods-9168" to be "Succeeded or Failed"
Jun 22 22:16:46.142: INFO: Pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117": Phase="Pending", Reason="", readiness=false. Elapsed: 36.353814ms
Jun 22 22:16:48.178: INFO: Pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072653513s
Jun 22 22:16:50.181: INFO: Pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075229159s
Jun 22 22:16:52.177: INFO: Pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071575808s
Jun 22 22:16:54.180: INFO: Pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074484697s
STEP: Saw pod success
Jun 22 22:16:54.180: INFO: Pod "client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117" satisfied condition "Succeeded or Failed"
Jun 22 22:16:54.224: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117 container env3cont: <nil>
STEP: delete the pod
Jun 22 22:16:54.314: INFO: Waiting for pod client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117 to disappear
Jun 22 22:16:54.349: INFO: Pod client-envvars-bc0b68da-9014-4120-aa65-e025b3abb117 no longer exists
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:24.821 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should contain environment variables for services [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 43 lines ...
test/e2e/kubectl/framework.go:23
With a server listening on 0.0.0.0
test/e2e/kubectl/portforward.go:454
should support forwarding over websockets
test/e2e/kubectl/portforward.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":7,"skipped":39,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:54.876: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 233 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:315
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":2,"skipped":11,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:16:55.712: INFO: Only supported for providers [openstack] (not gce)
... skipping 159 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:50
should verify that all csinodes have volume limits
test/e2e/storage/testsuites/volumelimits.go:249
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":3,"skipped":13,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:00.541: INFO: Only supported for providers [azure] (not gce)
... skipping 112 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":105,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:00.736: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 40 lines ...
STEP: Destroying namespace "services-8061" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:01.169: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 85 lines ...
Jun 22 22:16:06.441: INFO: PersistentVolumeClaim csi-hostpathj9rxg found but phase is Pending instead of Bound.
Jun 22 22:16:08.474: INFO: PersistentVolumeClaim csi-hostpathj9rxg found but phase is Pending instead of Bound.
Jun 22 22:16:10.509: INFO: PersistentVolumeClaim csi-hostpathj9rxg found but phase is Pending instead of Bound.
Jun 22 22:16:12.548: INFO: PersistentVolumeClaim csi-hostpathj9rxg found and phase=Bound (8.175665376s)
STEP: Creating pod pod-subpath-test-dynamicpv-gwnj
STEP: Creating a pod to test subpath
Jun 22 22:16:12.654: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gwnj" in namespace "provisioning-4945" to be "Succeeded or Failed"
Jun 22 22:16:12.688: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.723466ms
Jun 22 22:16:14.723: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069187285s
Jun 22 22:16:16.723: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068690539s
Jun 22 22:16:18.722: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067965469s
Jun 22 22:16:20.722: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068357351s
Jun 22 22:16:22.723: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069141456s
... skipping 2 lines ...
Jun 22 22:16:28.840: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.185973463s
Jun 22 22:16:30.730: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.07561305s
Jun 22 22:16:32.722: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06767155s
Jun 22 22:16:34.724: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Pending", Reason="", readiness=false. Elapsed: 22.070109646s
Jun 22 22:16:36.726: INFO: Pod "pod-subpath-test-dynamicpv-gwnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.071888538s
STEP: Saw pod success
Jun 22 22:16:36.726: INFO: Pod "pod-subpath-test-dynamicpv-gwnj" satisfied condition "Succeeded or Failed"
Jun 22 22:16:36.765: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-dynamicpv-gwnj container test-container-volume-dynamicpv-gwnj: <nil>
STEP: delete the pod
Jun 22 22:16:36.875: INFO: Waiting for pod pod-subpath-test-dynamicpv-gwnj to disappear
Jun 22 22:16:36.909: INFO: Pod pod-subpath-test-dynamicpv-gwnj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-gwnj
Jun 22 22:16:36.909: INFO: Deleting pod "pod-subpath-test-dynamicpv-gwnj" in namespace "provisioning-4945"
... skipping 61 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:01.816: INFO: Only supported for providers [aws] (not gce)
... skipping 149 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:02.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8869" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:02.859: INFO: Only supported for providers [azure] (not gce)
... skipping 343 lines ...
• [SLOW TEST:64.490 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
test/e2e/common/node/container_probe.go:244
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":41,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 139 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":33,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:06.154: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
Jun 22 22:15:07.398: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7233
Jun 22 22:15:07.433: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7233
Jun 22 22:15:07.485: INFO: creating *v1.StatefulSet: csi-mock-volumes-7233-2759/csi-mockplugin
Jun 22 22:15:07.522: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7233
Jun 22 22:15:07.560: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7233"
Jun 22 22:15:07.597: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7233 to register on node nodes-us-east1-b-vf6p
I0622 22:15:09.937269 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0622 22:15:09.973089 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7233","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 22:15:10.008189 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0622 22:15:10.043717 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0622 22:15:10.128076 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7233","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 22:15:10.809821 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7233"},"Error":"","FullError":null}
STEP: Creating pod with fsGroup
Jun 22 22:15:17.275: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 22 22:15:17.311: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5g55x] to have phase Bound
I0622 22:15:17.322943 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a"}}},"Error":"","FullError":null}
Jun 22 22:15:17.345: INFO: PersistentVolumeClaim pvc-5g55x found but phase is Pending instead of Bound.
Jun 22 22:15:19.380: INFO: PersistentVolumeClaim pvc-5g55x found and phase=Bound (2.06818627s)
Jun 22 22:15:19.482: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-djl5b" in namespace "csi-mock-volumes-7233" to be "running"
Jun 22 22:15:19.517: INFO: Pod "pvc-volume-tester-djl5b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.920724ms
I0622 22:15:20.887971 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:20.922609 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:20.957815 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 22:15:20.992: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:20.993: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:20.993: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-7233-2759/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-7233%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-7233%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:15:21.261297 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-7233/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a","storage.kubernetes.io/csiProvisionerIdentity":"1655936110061-8081-csi-mock-csi-mock-volumes-7233"}},"Response":{},"Error":"","FullError":null}
I0622 22:15:21.296463 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:21.331851 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:21.366950 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 22:15:21.405: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:21.406: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:21.406: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-7233-2759/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F55db1f39-40ed-4280-9bfd-d1054fd9b0f3%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F55db1f39-40ed-4280-9bfd-d1054fd9b0f3%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 22:15:21.552: INFO: Pod "pvc-volume-tester-djl5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069959908s
Jun 22 22:15:21.675: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:21.676: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:21.676: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-7233-2759/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F55db1f39-40ed-4280-9bfd-d1054fd9b0f3%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F55db1f39-40ed-4280-9bfd-d1054fd9b0f3%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 22:15:21.936: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:15:21.937: INFO: ExecWithOptions: Clientset creation
Jun 22 22:15:21.937: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-7233-2759/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F55db1f39-40ed-4280-9bfd-d1054fd9b0f3%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:15:22.210096 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-7233/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/55db1f39-40ed-4280-9bfd-d1054fd9b0f3/volumes/kubernetes.io~csi/pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a","storage.kubernetes.io/csiProvisionerIdentity":"1655936110061-8081-csi-mock-csi-mock-volumes-7233"}},"Response":{},"Error":"","FullError":null}
Jun 22 22:15:23.552: INFO: Pod "pvc-volume-tester-djl5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06978162s
Jun 22 22:15:25.557: INFO: Pod "pvc-volume-tester-djl5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074636858s
Jun 22 22:15:27.552: INFO: Pod "pvc-volume-tester-djl5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070135905s
Jun 22 22:15:29.552: INFO: Pod "pvc-volume-tester-djl5b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069598421s
Jun 22 22:15:31.553: INFO: Pod "pvc-volume-tester-djl5b": Phase="Running", Reason="", readiness=true. Elapsed: 12.070764806s
Jun 22 22:15:31.553: INFO: Pod "pvc-volume-tester-djl5b" satisfied condition "running"
STEP: Deleting pod pvc-volume-tester-djl5b
Jun 22 22:15:31.553: INFO: Deleting pod "pvc-volume-tester-djl5b" in namespace "csi-mock-volumes-7233"
Jun 22 22:15:31.596: INFO: Wait up to 5m0s for pod "pvc-volume-tester-djl5b" to be fully deleted
I0622 22:15:39.656220 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:15:39.690709 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/55db1f39-40ed-4280-9bfd-d1054fd9b0f3/volumes/kubernetes.io~csi/pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null}
Jun 22 22:16:04.039: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:16:04.040: INFO: ExecWithOptions: Clientset creation
Jun 22 22:16:04.040: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-7233-2759/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F55db1f39-40ed-4280-9bfd-d1054fd9b0f3%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:16:04.303206 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/55db1f39-40ed-4280-9bfd-d1054fd9b0f3/volumes/kubernetes.io~csi/pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a/mount"},"Response":{},"Error":"","FullError":null}
I0622 22:16:04.427016 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:16:04.467867 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-7233/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
STEP: Deleting claim pvc-5g55x
Jun 22 22:16:05.748: INFO: Waiting up to 2m0s for PersistentVolume pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a to get deleted
Jun 22 22:16:05.788: INFO: PersistentVolume pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a found and phase=Bound (39.531973ms)
I0622 22:16:05.789685 7112 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
Jun 22 22:16:07.825: INFO: PersistentVolume pvc-9735beb4-8f69-4e0d-ba37-e03af59bca7a was removed
STEP: Deleting storageclass csi-mock-volumes-7233-sc9hsjt
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7233
STEP: Waiting for namespaces [csi-mock-volumes-7233] to vanish
STEP: uninstalling csi mock driver
... skipping 39 lines ...
test/e2e/storage/utils/framework.go:23
Delegate FSGroup to CSI driver [LinuxOnly]
test/e2e/storage/csi_mock_volume.go:1719
should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
test/e2e/storage/csi_mock_volume.go:1735
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP","total":-1,"completed":3,"skipped":8,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 12 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:06.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3278" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 5 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-downwardapi-jmnh
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:16:39.016: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jmnh" in namespace "subpath-164" to be "Succeeded or Failed"
Jun 22 22:16:39.052: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Pending", Reason="", readiness=false. Elapsed: 35.734522ms
Jun 22 22:16:41.091: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074793007s
Jun 22 22:16:43.088: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071708203s
Jun 22 22:16:45.090: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073620967s
Jun 22 22:16:47.091: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074373545s
Jun 22 22:16:49.087: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.070436276s
... skipping 4 lines ...
Jun 22 22:16:59.088: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Running", Reason="", readiness=true. Elapsed: 20.071650902s
Jun 22 22:17:01.087: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Running", Reason="", readiness=true. Elapsed: 22.070422961s
Jun 22 22:17:03.096: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Running", Reason="", readiness=true. Elapsed: 24.079462432s
Jun 22 22:17:05.089: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Running", Reason="", readiness=true. Elapsed: 26.07250419s
Jun 22 22:17:07.093: INFO: Pod "pod-subpath-test-downwardapi-jmnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.075943527s
STEP: Saw pod success
Jun 22 22:17:07.093: INFO: Pod "pod-subpath-test-downwardapi-jmnh" satisfied condition "Succeeded or Failed"
Jun 22 22:17:07.137: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-downwardapi-jmnh container test-container-subpath-downwardapi-jmnh: <nil>
STEP: delete the pod
Jun 22 22:17:07.221: INFO: Waiting for pod pod-subpath-test-downwardapi-jmnh to disappear
Jun 22 22:17:07.255: INFO: Pod pod-subpath-test-downwardapi-jmnh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-jmnh
Jun 22 22:17:07.255: INFO: Deleting pod "pod-subpath-test-downwardapi-jmnh" in namespace "subpath-164"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with downward pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:07.393: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 70 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Jun 22 22:16:56.050: INFO: Waiting up to 5m0s for pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171" in namespace "emptydir-9200" to be "Succeeded or Failed"
Jun 22 22:16:56.089: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Pending", Reason="", readiness=false. Elapsed: 39.196531ms
Jun 22 22:16:58.124: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074034461s
Jun 22 22:17:00.140: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089657797s
Jun 22 22:17:02.124: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074238992s
Jun 22 22:17:04.126: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076355817s
Jun 22 22:17:06.124: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074089625s
Jun 22 22:17:08.123: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073049111s
STEP: Saw pod success
Jun 22 22:17:08.123: INFO: Pod "pod-7fe23a00-5263-441c-a14d-e7d30a970171" satisfied condition "Succeeded or Failed"
Jun 22 22:17:08.158: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-7fe23a00-5263-441c-a14d-e7d30a970171 container test-container: <nil>
STEP: delete the pod
Jun 22 22:17:08.247: INFO: Waiting for pod pod-7fe23a00-5263-441c-a14d-e7d30a970171 to disappear
Jun 22 22:17:08.281: INFO: Pod pod-7fe23a00-5263-441c-a14d-e7d30a970171 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
nonexistent volume subPath should have the correct mode and owner using FSGroup
test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":3,"skipped":24,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:08.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-577" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":7,"skipped":35,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:08.395: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [aws] (not gce)
test/e2e/storage/drivers/in_tree.go:1722
------------------------------
... skipping 81 lines ...
STEP: Destroying namespace "services-3197" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":30,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:08.712: INFO: Only supported for providers [azure] (not gce)
... skipping 137 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
test/e2e/node/security_context.go:163
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 22 22:17:01.037: INFO: Waiting up to 5m0s for pod "security-context-36981905-7640-4887-b80a-ec180989ec34" in namespace "security-context-7944" to be "Succeeded or Failed"
Jun 22 22:17:01.073: INFO: Pod "security-context-36981905-7640-4887-b80a-ec180989ec34": Phase="Pending", Reason="", readiness=false. Elapsed: 35.365365ms
Jun 22 22:17:03.110: INFO: Pod "security-context-36981905-7640-4887-b80a-ec180989ec34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073153436s
Jun 22 22:17:05.108: INFO: Pod "security-context-36981905-7640-4887-b80a-ec180989ec34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070882033s
Jun 22 22:17:07.112: INFO: Pod "security-context-36981905-7640-4887-b80a-ec180989ec34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07465236s
Jun 22 22:17:09.109: INFO: Pod "security-context-36981905-7640-4887-b80a-ec180989ec34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071812591s
STEP: Saw pod success
Jun 22 22:17:09.109: INFO: Pod "security-context-36981905-7640-4887-b80a-ec180989ec34" satisfied condition "Succeeded or Failed"
Jun 22 22:17:09.149: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod security-context-36981905-7640-4887-b80a-ec180989ec34 container test-container: <nil>
STEP: delete the pod
Jun 22 22:17:09.278: INFO: Waiting for pod security-context-36981905-7640-4887-b80a-ec180989ec34 to disappear
Jun 22 22:17:09.314: INFO: Pod security-context-36981905-7640-4887-b80a-ec180989ec34 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.651 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support seccomp unconfined on the container [LinuxOnly]
test/e2e/node/security_context.go:163
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":13,"skipped":111,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 472 lines ...
Jun 22 22:16:55.320: INFO: PersistentVolumeClaim pvc-4d67z found but phase is Pending instead of Bound.
Jun 22 22:16:57.355: INFO: PersistentVolumeClaim pvc-4d67z found and phase=Bound (14.311934694s)
Jun 22 22:16:57.355: INFO: Waiting up to 3m0s for PersistentVolume local-r68rx to have phase Bound
Jun 22 22:16:57.388: INFO: PersistentVolume local-r68rx found and phase=Bound (33.327273ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4xzg
STEP: Creating a pod to test subpath
Jun 22 22:16:57.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4xzg" in namespace "provisioning-3592" to be "Succeeded or Failed"
Jun 22 22:16:57.530: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 35.833099ms
Jun 22 22:16:59.565: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071071854s
Jun 22 22:17:01.565: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07088042s
Jun 22 22:17:03.567: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073085403s
Jun 22 22:17:05.576: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082199112s
Jun 22 22:17:07.566: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07192797s
Jun 22 22:17:09.566: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071638485s
Jun 22 22:17:11.567: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.073446732s
Jun 22 22:17:13.573: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.07857806s
STEP: Saw pod success
Jun 22 22:17:13.573: INFO: Pod "pod-subpath-test-preprovisionedpv-4xzg" satisfied condition "Succeeded or Failed"
Jun 22 22:17:13.611: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-4xzg container test-container-subpath-preprovisionedpv-4xzg: <nil>
STEP: delete the pod
Jun 22 22:17:13.694: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4xzg to disappear
Jun 22 22:17:13.733: INFO: Pod pod-subpath-test-preprovisionedpv-4xzg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4xzg
Jun 22 22:17:13.733: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4xzg" in namespace "provisioning-3592"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":35,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:14.305: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 125 lines ...
Jun 22 22:17:07.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 22 22:17:09.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 22 22:17:11.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 22, 22, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 22 22:17:14.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
test/e2e/framework/framework.go:647
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
Jun 22 22:17:14.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2605" for this suite.
... skipping 2 lines ...
test/e2e/apimachinery/webhook.go:104
• [SLOW TEST:12.910 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":10,"skipped":103,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:14.856: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 53 lines ...
• [SLOW TEST:13.164 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should update annotations on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:15.044: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 185 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":29,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:17.741: INFO: Only supported for providers [openstack] (not gce)
... skipping 47 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 22 22:17:06.121: INFO: Waiting up to 5m0s for pod "pod-f7645671-dee7-4278-834a-118370c24f06" in namespace "emptydir-3866" to be "Succeeded or Failed"
Jun 22 22:17:06.155: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Pending", Reason="", readiness=false. Elapsed: 33.501529ms
Jun 22 22:17:08.190: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06857739s
Jun 22 22:17:10.192: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071019284s
Jun 22 22:17:12.194: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07239865s
Jun 22 22:17:14.195: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073493907s
Jun 22 22:17:16.195: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073592934s
Jun 22 22:17:18.190: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069140784s
STEP: Saw pod success
Jun 22 22:17:18.190: INFO: Pod "pod-f7645671-dee7-4278-834a-118370c24f06" satisfied condition "Succeeded or Failed"
Jun 22 22:17:18.224: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-f7645671-dee7-4278-834a-118370c24f06 container test-container: <nil>
STEP: delete the pod
Jun 22 22:17:18.301: INFO: Waiting for pod pod-f7645671-dee7-4278-834a-118370c24f06 to disappear
Jun 22 22:17:18.335: INFO: Pod pod-f7645671-dee7-4278-834a-118370c24f06 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.572 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:18.447: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 192 lines ...
test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
test/e2e/storage/csi_mock_volume.go:332
should require VolumeAttach for ephemermal volume and drivers with attachment
test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment","total":-1,"completed":3,"skipped":5,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:24.732: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 194 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:50
Verify if offline PVC expansion works
test/e2e/storage/testsuites/volume_expand.go:176
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:24.860: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 69 lines ...
• [SLOW TEST:30.969 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to create a functioning NodePort service [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:25.930: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 83 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:17:16.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname only [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 22:17:17.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1" in namespace "downward-api-6843" to be "Succeeded or Failed"
Jun 22 22:17:17.060: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.372508ms
Jun 22 22:17:19.097: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074601862s
Jun 22 22:17:21.097: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075330534s
Jun 22 22:17:23.104: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082182618s
Jun 22 22:17:25.096: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074571921s
Jun 22 22:17:27.095: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073561178s
Jun 22 22:17:29.095: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073050447s
STEP: Saw pod success
Jun 22 22:17:29.095: INFO: Pod "downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1" satisfied condition "Succeeded or Failed"
Jun 22 22:17:29.135: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1 container client-container: <nil>
STEP: delete the pod
Jun 22 22:17:29.216: INFO: Waiting for pod downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1 to disappear
Jun 22 22:17:29.251: INFO: Pod downwardapi-volume-14f5fa7f-7134-468d-a50c-3a560a9c03e1 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.594 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide podname only [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:29.350: INFO: Only supported for providers [azure] (not gce)
... skipping 133 lines ...
• [SLOW TEST:15.333 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should support configurable pod resolv.conf
test/e2e/network/dns.go:460
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":4,"skipped":24,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 35 lines ...
test/e2e/kubectl/framework.go:23
With a server listening on localhost
test/e2e/kubectl/portforward.go:476
should support forwarding over websockets
test/e2e/kubectl/portforward.go:492
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":8,"skipped":49,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:31.106: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
test/e2e/common/node/container_probe.go:59
[It] should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
test/e2e/common/node/container_probe.go:558
Jun 22 22:17:03.293: INFO: Waiting up to 5m0s for all pods (need at least 1) in namespace 'container-probe-9236' to be running and ready
Jun 22 22:17:03.401: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:03.401: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (0 seconds elapsed)
Jun 22 22:17:03.401: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:03.401: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:03.401: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:03.401: INFO:
Jun 22 22:17:05.510: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:05.510: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (2 seconds elapsed)
Jun 22 22:17:05.510: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:05.510: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:05.510: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:05.510: INFO:
Jun 22 22:17:07.509: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:07.509: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (4 seconds elapsed)
Jun 22 22:17:07.509: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:07.509: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:07.509: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:07.509: INFO:
Jun 22 22:17:09.517: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:09.517: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (6 seconds elapsed)
Jun 22 22:17:09.517: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:09.517: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:09.517: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:09.517: INFO:
Jun 22 22:17:11.506: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:11.506: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (8 seconds elapsed)
Jun 22 22:17:11.506: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:11.506: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:11.506: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:11.506: INFO:
Jun 22 22:17:13.507: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:13.507: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (10 seconds elapsed)
Jun 22 22:17:13.507: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:13.507: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:13.507: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:13.507: INFO:
Jun 22 22:17:15.507: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:15.507: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (12 seconds elapsed)
Jun 22 22:17:15.507: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:15.507: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:15.507: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:15.507: INFO:
Jun 22 22:17:17.509: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:17.509: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (14 seconds elapsed)
Jun 22 22:17:17.509: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:17.509: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:17.509: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:17.509: INFO:
Jun 22 22:17:19.512: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:19.512: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (16 seconds elapsed)
Jun 22 22:17:19.512: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:19.512: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:19.512: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:19.512: INFO:
Jun 22 22:17:21.509: INFO: The status of Pod probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 22 22:17:21.509: INFO: 0 / 1 pods in namespace 'container-probe-9236' are running and ready (18 seconds elapsed)
Jun 22 22:17:21.509: INFO: expected 0 pod replicas in namespace 'container-probe-9236', 0 are Running and Ready.
Jun 22 22:17:21.509: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 22:17:21.509: INFO: probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43 nodes-us-east1-b-3xs4 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC ContainersNotReady containers with unready status: [probe-test-d7fc98c9-319e-479f-9084-77656b9b5a43]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:03 +0000 UTC }]
Jun 22 22:17:21.509: INFO:
Jun 22 22:17:23.524: INFO: 1 / 1 pods in namespace 'container-probe-9236' are running and ready (20 seconds elapsed)
... skipping 7 lines ...
• [SLOW TEST:30.712 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
test/e2e/common/node/container_probe.go:558
------------------------------
{"msg":"PASSED [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe","total":-1,"completed":6,"skipped":63,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 197 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":8,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:35.904: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 141 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":58,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:37.495: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 192 lines ...
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:17:03.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 108 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume at the same time
test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":20,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:38.732: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver hostPath doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 86 lines ...
test/e2e/storage/testsuites/volume_expand.go:176
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:17:11.870: INFO: >>> kubeConfig: /root/.kube/config
... skipping 85 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":17,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:39.106: INFO: Only supported for providers [vsphere] (not gce)
... skipping 316 lines ...
test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
test/e2e/common/node/lifecycle_hook.go:46
should execute prestop http hook properly [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:39.722: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 76 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:40.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-689" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":7,"skipped":40,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 34 lines ...
Jun 22 22:17:09.271: INFO: PersistentVolumeClaim pvc-5sz5k found but phase is Pending instead of Bound.
Jun 22 22:17:11.305: INFO: PersistentVolumeClaim pvc-5sz5k found and phase=Bound (14.297558111s)
Jun 22 22:17:11.305: INFO: Waiting up to 3m0s for PersistentVolume local-9ws9b to have phase Bound
Jun 22 22:17:11.345: INFO: PersistentVolume local-9ws9b found and phase=Bound (40.561361ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lkhd
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:17:11.458: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lkhd" in namespace "provisioning-4957" to be "Succeeded or Failed"
Jun 22 22:17:11.493: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.795146ms
Jun 22 22:17:13.534: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075556638s
Jun 22 22:17:15.533: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074396432s
Jun 22 22:17:17.534: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075958134s
Jun 22 22:17:19.538: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079816824s
Jun 22 22:17:21.534: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Running", Reason="", readiness=true. Elapsed: 10.075865091s
... skipping 4 lines ...
Jun 22 22:17:31.540: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Running", Reason="", readiness=true. Elapsed: 20.081801903s
Jun 22 22:17:33.531: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Running", Reason="", readiness=true. Elapsed: 22.072666949s
Jun 22 22:17:35.533: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Running", Reason="", readiness=true. Elapsed: 24.074328822s
Jun 22 22:17:37.532: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Running", Reason="", readiness=true. Elapsed: 26.073693431s
Jun 22 22:17:39.533: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.074204644s
STEP: Saw pod success
Jun 22 22:17:39.533: INFO: Pod "pod-subpath-test-preprovisionedpv-lkhd" satisfied condition "Succeeded or Failed"
Jun 22 22:17:39.569: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-lkhd container test-container-subpath-preprovisionedpv-lkhd: <nil>
STEP: delete the pod
Jun 22 22:17:39.646: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lkhd to disappear
Jun 22 22:17:39.681: INFO: Pod pod-subpath-test-preprovisionedpv-lkhd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lkhd
Jun 22 22:17:39.681: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lkhd" in namespace "provisioning-4957"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
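The repeated `Waiting up to 5m0s for pod … Elapsed: …` lines in the section above come from a poll-until-phase loop in the e2e framework. A minimal Python sketch of that pattern (the function name and the deterministic, sleep-free polling are illustrative assumptions, not the framework's actual Go code):

```python
def wait_for_pod_phase(poll, targets=("Succeeded", "Failed"),
                       timeout_s=300, interval_s=2):
    """Poll poll() once per interval until it reports a phase in targets,
    mirroring the 'to be "Succeeded or Failed"' loop seen in the log.
    Returns (phase, number_of_polls) or raises TimeoutError."""
    for attempt in range(int(timeout_s / interval_s) + 1):
        phase = poll()  # e.g. reads pod.status.phase from the API server
        if phase in targets:
            return phase, attempt + 1
        # the real loop sleeps interval_s here; elided to keep the sketch fast
    raise TimeoutError("pod never reached any of %r" % (targets,))
```

For the test above, a Pending→Running→Succeeded sequence returns on the first poll that observes `Succeeded`, which is what produces the final `Elapsed:` line before `STEP: Saw pod success`.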
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":46,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 29 lines ...
Jun 22 22:17:25.281: INFO: PersistentVolumeClaim pvc-rqjrh found but phase is Pending instead of Bound.
Jun 22 22:17:27.316: INFO: PersistentVolumeClaim pvc-rqjrh found and phase=Bound (6.143180567s)
Jun 22 22:17:27.316: INFO: Waiting up to 3m0s for PersistentVolume local-bx5bq to have phase Bound
Jun 22 22:17:27.354: INFO: PersistentVolume local-bx5bq found and phase=Bound (37.648717ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9j6v
STEP: Creating a pod to test subpath
Jun 22 22:17:27.458: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9j6v" in namespace "provisioning-1444" to be "Succeeded or Failed"
Jun 22 22:17:27.492: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 33.597275ms
Jun 22 22:17:29.527: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068725698s
Jun 22 22:17:31.530: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071845834s
Jun 22 22:17:33.531: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072327026s
Jun 22 22:17:35.532: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073391825s
Jun 22 22:17:37.529: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.070415141s
Jun 22 22:17:39.531: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.072812938s
STEP: Saw pod success
Jun 22 22:17:39.531: INFO: Pod "pod-subpath-test-preprovisionedpv-9j6v" satisfied condition "Succeeded or Failed"
Jun 22 22:17:39.566: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-9j6v container test-container-subpath-preprovisionedpv-9j6v: <nil>
STEP: delete the pod
Jun 22 22:17:39.658: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9j6v to disappear
Jun 22 22:17:39.696: INFO: Pod pod-subpath-test-preprovisionedpv-9j6v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9j6v
Jun 22 22:17:39.696: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9j6v" in namespace "provisioning-1444"
... skipping 42 lines ...
test/e2e/storage/testsuites/subpath.go:221
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:1577
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":40,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:40.283: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 40 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:40.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5319" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:40.903: INFO: Only supported for providers [azure] (not gce)
... skipping 69 lines ...
• [SLOW TEST:78.564 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should remove from active list jobs that have been deleted
test/e2e/apps/cronjob.go:241
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":7,"skipped":73,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:41.740: INFO: Only supported for providers [aws] (not gce)
... skipping 95 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 34 lines ...
Jun 22 22:17:24.053: INFO: PersistentVolumeClaim pvc-szbnr found but phase is Pending instead of Bound.
Jun 22 22:17:26.088: INFO: PersistentVolumeClaim pvc-szbnr found and phase=Bound (14.28966177s)
Jun 22 22:17:26.088: INFO: Waiting up to 3m0s for PersistentVolume local-26tff to have phase Bound
Jun 22 22:17:26.122: INFO: PersistentVolume local-26tff found and phase=Bound (33.87591ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-l9wd
STEP: Creating a pod to test subpath
Jun 22 22:17:26.227: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l9wd" in namespace "provisioning-255" to be "Succeeded or Failed"
Jun 22 22:17:26.264: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.882105ms
Jun 22 22:17:28.299: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071750337s
Jun 22 22:17:30.302: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075106807s
Jun 22 22:17:32.299: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071818974s
Jun 22 22:17:34.300: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073131959s
Jun 22 22:17:36.307: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079830337s
STEP: Saw pod success
Jun 22 22:17:36.307: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd" satisfied condition "Succeeded or Failed"
Jun 22 22:17:36.341: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-l9wd container test-container-subpath-preprovisionedpv-l9wd: <nil>
STEP: delete the pod
Jun 22 22:17:36.419: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l9wd to disappear
Jun 22 22:17:36.457: INFO: Pod pod-subpath-test-preprovisionedpv-l9wd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l9wd
Jun 22 22:17:36.457: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l9wd" in namespace "provisioning-255"
STEP: Creating pod pod-subpath-test-preprovisionedpv-l9wd
STEP: Creating a pod to test subpath
Jun 22 22:17:36.527: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l9wd" in namespace "provisioning-255" to be "Succeeded or Failed"
Jun 22 22:17:36.562: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 35.386079ms
Jun 22 22:17:38.597: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070465102s
Jun 22 22:17:40.598: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070670813s
Jun 22 22:17:42.597: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06957624s
STEP: Saw pod success
Jun 22 22:17:42.597: INFO: Pod "pod-subpath-test-preprovisionedpv-l9wd" satisfied condition "Succeeded or Failed"
Jun 22 22:17:42.639: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-l9wd container test-container-subpath-preprovisionedpv-l9wd: <nil>
STEP: delete the pod
Jun 22 22:17:42.724: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l9wd to disappear
Jun 22 22:17:42.758: INFO: Pod pod-subpath-test-preprovisionedpv-l9wd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l9wd
Jun 22 22:17:42.758: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l9wd" in namespace "provisioning-255"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":12,"failed":0}
SS
------------------------------
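Each `{"msg":"PASSED …","total":-1,…}` line in this log is a machine-readable per-worker progress record. A small parser sketch, assuming only the field layout visible in the log itself (`total` of -1 means the suite total is unknown to this worker):

```python
import json

def parse_progress(line):
    """Parse one Ginkgo progress record like the PASSED lines in this log.
    Returns (status, test_name, completed, skipped, failed)."""
    rec = json.loads(line)
    # "msg" is "<STATUS> <full test name>"; split on the first space
    status, _, name = rec["msg"].partition(" ")
    return status, name, rec["completed"], rec["skipped"], rec["failed"]
```

Feeding it one of the lines above, e.g. the CronJob record, yields `("PASSED", "[sig-apps] CronJob should remove from active list jobs that have been deleted", 7, 73, 0)`.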
[BeforeEach] [sig-apps] StatefulSet
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 60 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should adopt matching orphans and release non-matching pods
test/e2e/apps/statefulset.go:171
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":4,"skipped":68,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 22 22:17:42.065: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 22:17:42.108: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9ndc
STEP: Creating a pod to test subpath
Jun 22 22:17:42.148: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9ndc" in namespace "provisioning-8308" to be "Succeeded or Failed"
Jun 22 22:17:42.184: INFO: Pod "pod-subpath-test-inlinevolume-9ndc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.063333ms
Jun 22 22:17:44.224: INFO: Pod "pod-subpath-test-inlinevolume-9ndc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076265651s
Jun 22 22:17:46.222: INFO: Pod "pod-subpath-test-inlinevolume-9ndc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073743056s
Jun 22 22:17:48.221: INFO: Pod "pod-subpath-test-inlinevolume-9ndc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072754744s
STEP: Saw pod success
Jun 22 22:17:48.221: INFO: Pod "pod-subpath-test-inlinevolume-9ndc" satisfied condition "Succeeded or Failed"
Jun 22 22:17:48.256: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-inlinevolume-9ndc container test-container-volume-inlinevolume-9ndc: <nil>
STEP: delete the pod
Jun 22 22:17:48.337: INFO: Waiting for pod pod-subpath-test-inlinevolume-9ndc to disappear
Jun 22 22:17:48.372: INFO: Pod pod-subpath-test-inlinevolume-9ndc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9ndc
Jun 22 22:17:48.372: INFO: Deleting pod "pod-subpath-test-inlinevolume-9ndc" in namespace "provisioning-8308"
... skipping 22 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in volume subpath
Jun 22 22:17:40.580: INFO: Waiting up to 5m0s for pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39" in namespace "var-expansion-3602" to be "Succeeded or Failed"
Jun 22 22:17:40.615: INFO: Pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39": Phase="Pending", Reason="", readiness=false. Elapsed: 34.718712ms
Jun 22 22:17:42.651: INFO: Pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071123836s
Jun 22 22:17:44.651: INFO: Pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071131359s
Jun 22 22:17:46.651: INFO: Pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070448707s
Jun 22 22:17:48.652: INFO: Pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071465659s
STEP: Saw pod success
Jun 22 22:17:48.652: INFO: Pod "var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39" satisfied condition "Succeeded or Failed"
Jun 22 22:17:48.694: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39 container dapi-container: <nil>
STEP: delete the pod
Jun 22 22:17:48.781: INFO: Waiting for pod var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39 to disappear
Jun 22 22:17:48.819: INFO: Pod var-expansion-0b5c089a-0596-412e-a24f-2664a4900a39 no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.599 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should allow substituting values in a volume subpath [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:48.901: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 32 lines ...
test/e2e/storage/testsuites/provisioning.go:525
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":90,"failed":0}
[BeforeEach] [sig-node] RuntimeClass
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:17:48.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:49.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-2678" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":9,"skipped":90,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 5 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-secret-hlxz
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:17:15.250: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hlxz" in namespace "subpath-4105" to be "Succeeded or Failed"
Jun 22 22:17:15.284: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Pending", Reason="", readiness=false. Elapsed: 33.931132ms
Jun 22 22:17:17.324: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073722743s
Jun 22 22:17:19.327: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076759678s
Jun 22 22:17:21.320: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070144983s
Jun 22 22:17:23.328: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077909182s
Jun 22 22:17:25.319: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069013683s
... skipping 7 lines ...
Jun 22 22:17:41.320: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Running", Reason="", readiness=true. Elapsed: 26.069792876s
Jun 22 22:17:43.321: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Running", Reason="", readiness=true. Elapsed: 28.070797882s
Jun 22 22:17:45.323: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Running", Reason="", readiness=true. Elapsed: 30.073126393s
Jun 22 22:17:47.320: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Running", Reason="", readiness=true. Elapsed: 32.070275864s
Jun 22 22:17:49.322: INFO: Pod "pod-subpath-test-secret-hlxz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.071624005s
STEP: Saw pod success
Jun 22 22:17:49.322: INFO: Pod "pod-subpath-test-secret-hlxz" satisfied condition "Succeeded or Failed"
Jun 22 22:17:49.358: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-secret-hlxz container test-container-subpath-secret-hlxz: <nil>
STEP: delete the pod
Jun 22 22:17:49.482: INFO: Waiting for pod pod-subpath-test-secret-hlxz to disappear
Jun 22 22:17:49.526: INFO: Pod pod-subpath-test-secret-hlxz no longer exists
STEP: Deleting pod pod-subpath-test-secret-hlxz
Jun 22 22:17:49.526: INFO: Deleting pod "pod-subpath-test-secret-hlxz" in namespace "subpath-4105"
... skipping 10 lines ...
test/e2e/storage/subpath.go:36
should support subpaths with secret pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":-1,"completed":11,"skipped":107,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:49.675: INFO: Only supported for providers [vsphere] (not gce)
... skipping 90 lines ...
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":9,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:49.711: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 107 lines ...
• [SLOW TEST:10.503 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":61,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 3 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
Jun 22 22:17:24.986: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 22:17:25.065: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3414" in namespace "provisioning-3414" to be "Succeeded or Failed"
Jun 22 22:17:25.098: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Pending", Reason="", readiness=false. Elapsed: 33.371031ms
Jun 22 22:17:27.133: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068070787s
Jun 22 22:17:29.137: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072645849s
Jun 22 22:17:31.133: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068858065s
Jun 22 22:17:33.136: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07119255s
STEP: Saw pod success
Jun 22 22:17:33.136: INFO: Pod "hostpath-symlink-prep-provisioning-3414" satisfied condition "Succeeded or Failed"
Jun 22 22:17:33.136: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3414" in namespace "provisioning-3414"
Jun 22 22:17:33.187: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3414" to be fully deleted
Jun 22 22:17:33.221: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-txc6
STEP: Creating a pod to test subpath
Jun 22 22:17:33.259: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-txc6" in namespace "provisioning-3414" to be "Succeeded or Failed"
Jun 22 22:17:33.293: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.909925ms
Jun 22 22:17:35.329: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069195965s
Jun 22 22:17:37.328: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068777128s
Jun 22 22:17:39.328: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068433311s
Jun 22 22:17:41.331: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071343414s
Jun 22 22:17:43.332: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072757442s
Jun 22 22:17:45.334: INFO: Pod "pod-subpath-test-inlinevolume-txc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.074154527s
STEP: Saw pod success
Jun 22 22:17:45.334: INFO: Pod "pod-subpath-test-inlinevolume-txc6" satisfied condition "Succeeded or Failed"
Jun 22 22:17:45.377: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-inlinevolume-txc6 container test-container-subpath-inlinevolume-txc6: <nil>
STEP: delete the pod
Jun 22 22:17:45.480: INFO: Waiting for pod pod-subpath-test-inlinevolume-txc6 to disappear
Jun 22 22:17:45.513: INFO: Pod pod-subpath-test-inlinevolume-txc6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-txc6
Jun 22 22:17:45.513: INFO: Deleting pod "pod-subpath-test-inlinevolume-txc6" in namespace "provisioning-3414"
STEP: Deleting pod
Jun 22 22:17:45.552: INFO: Deleting pod "pod-subpath-test-inlinevolume-txc6" in namespace "provisioning-3414"
Jun 22 22:17:45.626: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3414" in namespace "provisioning-3414" to be "Succeeded or Failed"
Jun 22 22:17:45.662: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Pending", Reason="", readiness=false. Elapsed: 35.527266ms
Jun 22 22:17:47.699: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073147751s
Jun 22 22:17:49.700: INFO: Pod "hostpath-symlink-prep-provisioning-3414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073528746s
STEP: Saw pod success
Jun 22 22:17:49.700: INFO: Pod "hostpath-symlink-prep-provisioning-3414" satisfied condition "Succeeded or Failed"
Jun 22 22:17:49.700: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3414" in namespace "provisioning-3414"
Jun 22 22:17:49.759: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3414" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 22 22:17:49.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3414" for this suite.
... skipping 8 lines ...
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":10,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:49.894: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 113 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:49.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-573" for this suite.
•SSS
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":8,"skipped":55,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:49.955: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 128 lines ...
Jun 22 22:17:12.496: INFO: PersistentVolumeClaim csi-hostpath7lz6h found but phase is Pending instead of Bound.
Jun 22 22:17:14.532: INFO: PersistentVolumeClaim csi-hostpath7lz6h found but phase is Pending instead of Bound.
Jun 22 22:17:16.576: INFO: PersistentVolumeClaim csi-hostpath7lz6h found but phase is Pending instead of Bound.
Jun 22 22:17:18.613: INFO: PersistentVolumeClaim csi-hostpath7lz6h found and phase=Bound (6.153740219s)
STEP: Creating pod pod-subpath-test-dynamicpv-lbqk
STEP: Creating a pod to test subpath
Jun 22 22:17:18.727: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lbqk" in namespace "provisioning-3616" to be "Succeeded or Failed"
Jun 22 22:17:18.767: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 39.691869ms
Jun 22 22:17:20.806: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078649257s
Jun 22 22:17:22.806: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078890029s
Jun 22 22:17:24.808: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08017293s
Jun 22 22:17:26.810: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082351398s
Jun 22 22:17:28.808: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080810487s
Jun 22 22:17:30.808: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.080296382s
Jun 22 22:17:32.808: INFO: Pod "pod-subpath-test-dynamicpv-lbqk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.080049815s
STEP: Saw pod success
Jun 22 22:17:32.808: INFO: Pod "pod-subpath-test-dynamicpv-lbqk" satisfied condition "Succeeded or Failed"
Jun 22 22:17:32.843: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-subpath-test-dynamicpv-lbqk container test-container-volume-dynamicpv-lbqk: <nil>
STEP: delete the pod
Jun 22 22:17:32.940: INFO: Waiting for pod pod-subpath-test-dynamicpv-lbqk to disappear
Jun 22 22:17:32.974: INFO: Pod pod-subpath-test-dynamicpv-lbqk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-lbqk
Jun 22 22:17:32.975: INFO: Deleting pod "pod-subpath-test-dynamicpv-lbqk" in namespace "provisioning-3616"
... skipping 61 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":63,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 5 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-kv8w
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:17:26.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kv8w" in namespace "subpath-5268" to be "Succeeded or Failed"
Jun 22 22:17:26.421: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Pending", Reason="", readiness=false. Elapsed: 34.942683ms
Jun 22 22:17:28.459: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072266175s
Jun 22 22:17:30.458: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071265527s
Jun 22 22:17:32.459: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072234264s
Jun 22 22:17:34.458: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071912112s
Jun 22 22:17:36.458: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071139278s
... skipping 3 lines ...
Jun 22 22:17:44.458: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Running", Reason="", readiness=true. Elapsed: 18.072006398s
Jun 22 22:17:46.457: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Running", Reason="", readiness=true. Elapsed: 20.070955294s
Jun 22 22:17:48.458: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Running", Reason="", readiness=true. Elapsed: 22.071807229s
Jun 22 22:17:50.457: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Running", Reason="", readiness=true. Elapsed: 24.070803154s
Jun 22 22:17:52.459: INFO: Pod "pod-subpath-test-configmap-kv8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.072108995s
STEP: Saw pod success
Jun 22 22:17:52.459: INFO: Pod "pod-subpath-test-configmap-kv8w" satisfied condition "Succeeded or Failed"
Jun 22 22:17:52.495: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-configmap-kv8w container test-container-subpath-configmap-kv8w: <nil>
STEP: delete the pod
Jun 22 22:17:52.580: INFO: Waiting for pod pod-subpath-test-configmap-kv8w to disappear
Jun 22 22:17:52.617: INFO: Pod pod-subpath-test-configmap-kv8w no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kv8w
Jun 22 22:17:52.617: INFO: Deleting pod "pod-subpath-test-configmap-kv8w" in namespace "subpath-5268"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with configmap pod with mountPath of existing file [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]","total":-1,"completed":9,"skipped":72,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:52.746: INFO: Only supported for providers [vsphere] (not gce)
... skipping 52 lines ...
• [SLOW TEST:21.561 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
should serve a basic image on each replica with a private image
test/e2e/apps/rc.go:70
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a private image","total":-1,"completed":9,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:52.763: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/framework/framework.go:187
... skipping 41 lines ...
STEP: Destroying namespace "services-4957" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:53.564: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 159 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-file]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0}
[BeforeEach] [sig-node] Pods
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:17:23.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:30.734 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should support pod readiness gates [NodeConformance]
test/e2e/common/node/pods.go:768
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","total":-1,"completed":4,"skipped":57,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:12.736 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
evictions: enough pods, absolute => should allow an eviction
test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":6,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:56.117: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 31 lines ...
STEP: Destroying namespace "services-2070" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":7,"skipped":18,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:56.524: INFO: Driver "local" does not provide raw block - skipping
... skipping 26 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver "csi-hostpath" does not support topology - skipping
test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 15 lines ...
test/e2e/storage/testsuites/volumes.go:198
Only supported for providers [vsphere] (not gce)
test/e2e/storage/drivers/in_tree.go:1439
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [sig-node] Probing container
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:16:48.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
• [SLOW TEST:67.666 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should be restarted with a failing exec liveness probe that took longer than the timeout
test/e2e/common/node/container_probe.go:261
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":8,"skipped":34,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:56.579: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 72 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:17:57.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9288" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":8,"skipped":26,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:57.204: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 223 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":56,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:8.083 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":9,"skipped":68,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:58.105: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 210 lines ...
• [SLOW TEST:25.428 seconds]
[sig-node] KubeletManagedEtcHosts
test/e2e/common/node/framework.go:23
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:17:59.170: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 68 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: creating secret secrets-9573/secret-test-600b30ff-8dd0-4cc8-a78b-a6d5c3c6253f
STEP: Creating a pod to test consume secrets
Jun 22 22:17:50.154: INFO: Waiting up to 5m0s for pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1" in namespace "secrets-9573" to be "Succeeded or Failed"
Jun 22 22:17:50.215: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 61.798401ms
Jun 22 22:17:52.253: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098909795s
Jun 22 22:17:54.253: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098836613s
Jun 22 22:17:56.252: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098595049s
Jun 22 22:17:58.252: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097975539s
Jun 22 22:18:00.255: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101089933s
STEP: Saw pod success
Jun 22 22:18:00.255: INFO: Pod "pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1" satisfied condition "Succeeded or Failed"
Jun 22 22:18:00.299: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1 container env-test: <nil>
STEP: delete the pod
Jun 22 22:18:00.398: INFO: Waiting for pod pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1 to disappear
Jun 22 22:18:00.435: INFO: Pod pod-configmaps-24aa41cc-da9f-4f57-86d9-bb13675f4fa1 no longer exists
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.745 seconds]
[sig-node] Secrets
test/e2e/common/node/framework.go:23
should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:00.539: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:110
STEP: Creating a pod to test downward api env vars
Jun 22 22:17:50.275: INFO: Waiting up to 5m0s for pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43" in namespace "downward-api-5832" to be "Succeeded or Failed"
Jun 22 22:17:50.319: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43": Phase="Pending", Reason="", readiness=false. Elapsed: 43.430936ms
Jun 22 22:17:52.354: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079188672s
Jun 22 22:17:54.354: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079001278s
Jun 22 22:17:56.355: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079789014s
Jun 22 22:17:58.356: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080957525s
Jun 22 22:18:00.354: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078636459s
STEP: Saw pod success
Jun 22 22:18:00.354: INFO: Pod "downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43" satisfied condition "Succeeded or Failed"
Jun 22 22:18:00.397: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43 container dapi-container: <nil>
STEP: delete the pod
Jun 22 22:18:00.477: INFO: Waiting for pod downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43 to disappear
Jun 22 22:18:00.512: INFO: Pod downward-api-8acba155-86bc-41e4-ac4f-534e9e91bf43 no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.676 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:110
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":6,"skipped":81,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:00.634: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 126 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume at the same time
test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":51,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:00.768: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver emptydir doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 27 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test override command
Jun 22 22:17:54.019: INFO: Waiting up to 5m0s for pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6" in namespace "containers-5490" to be "Succeeded or Failed"
Jun 22 22:17:54.052: INFO: Pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.914835ms
Jun 22 22:17:56.086: INFO: Pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067480364s
Jun 22 22:17:58.094: INFO: Pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074987766s
Jun 22 22:18:00.088: INFO: Pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069054332s
Jun 22 22:18:02.085: INFO: Pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066282618s
STEP: Saw pod success
Jun 22 22:18:02.085: INFO: Pod "client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6" satisfied condition "Succeeded or Failed"
Jun 22 22:18:02.119: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:18:02.210: INFO: Waiting for pod client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6 to disappear
Jun 22 22:18:02.244: INFO: Pod client-containers-fa9f10e6-de1d-44f9-8a13-bf35f7d66dd6 no longer exists
[AfterEach] [sig-node] Containers
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.581 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":113,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:02.344: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 44 lines ...
• [SLOW TEST:12.759 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
evictions: enough pods, replicaSet, percentage => should allow an eviction
test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":10,"skipped":97,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:02.472: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 48 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:03.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-5521" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":11,"skipped":103,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Pods Extended
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 129 lines ...
test/e2e/node/framework.go:23
Pod Container Status
test/e2e/node/pods.go:202
should never report container start when an init container fails
test/e2e/node/pods.go:216
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails","total":-1,"completed":6,"skipped":52,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:04.785: INFO: Only supported for providers [aws] (not gce)
... skipping 252 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read-only inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":3,"skipped":27,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 18 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:09.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2924" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":4,"skipped":30,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:09.774: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 62 lines ...
• [SLOW TEST:9.299 seconds]
[sig-node] PrivilegedPod [NodeConformance]
test/e2e/common/node/framework.go:23
should enable privileged commands [LinuxOnly]
test/e2e/common/node/privileged.go:52
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":12,"skipped":118,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:11.673: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-92047290-0dbf-4d7c-b833-090616405b97
STEP: Creating a pod to test consume configMaps
Jun 22 22:17:58.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6" in namespace "configmap-2457" to be "Succeeded or Failed"
Jun 22 22:17:58.229: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.097963ms
Jun 22 22:18:00.279: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08349457s
Jun 22 22:18:02.265: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069731319s
Jun 22 22:18:04.264: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068483492s
Jun 22 22:18:06.264: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069119822s
Jun 22 22:18:08.264: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068825293s
Jun 22 22:18:10.264: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069052598s
Jun 22 22:18:12.263: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.068251151s
STEP: Saw pod success
Jun 22 22:18:12.264: INFO: Pod "pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6" satisfied condition "Succeeded or Failed"
Jun 22 22:18:12.297: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:18:12.378: INFO: Waiting for pod pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6 to disappear
Jun 22 22:18:12.419: INFO: Pod pod-configmaps-57ea2b39-c52f-48a7-8a52-d9da6f9d59f6 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.613 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":64,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 77 lines ...
• [SLOW TEST:29.562 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":5,"skipped":70,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:13.244: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 22:18:03.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487" in namespace "downward-api-4358" to be "Succeeded or Failed"
Jun 22 22:18:03.862: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487": Phase="Pending", Reason="", readiness=false. Elapsed: 43.880764ms
Jun 22 22:18:05.905: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086793942s
Jun 22 22:18:07.905: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086665201s
Jun 22 22:18:09.899: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080612422s
Jun 22 22:18:11.899: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08058354s
Jun 22 22:18:13.899: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08104779s
STEP: Saw pod success
Jun 22 22:18:13.899: INFO: Pod "downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487" satisfied condition "Succeeded or Failed"
Jun 22 22:18:13.935: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487 container client-container: <nil>
STEP: delete the pod
Jun 22 22:18:14.025: INFO: Waiting for pod downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487 to disappear
Jun 22 22:18:14.061: INFO: Pod downwardapi-volume-e69fdd5c-2978-4f9b-9006-354691e75487 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.606 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":104,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:14.158: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
• [SLOW TEST:11.071 seconds]
[sig-network] CVE-2021-29923
test/e2e/network/common/framework.go:23
IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
test/e2e/network/funny_ips.go:92
------------------------------
{"msg":"PASSED [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal","total":-1,"completed":7,"skipped":54,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:10.714 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide /etc/hosts entries for the cluster [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:20.523: INFO: Only supported for providers [azure] (not gce)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 26 lines ...
test/e2e/common/node/runtime.go:43
on terminated container
test/e2e/common/node/runtime.go:136
should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:22.643: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/framework/framework.go:187
... skipping 69 lines ...
• [SLOW TEST:12.693 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
pod should support shared volumes between containers [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":12,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:25.216: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/framework/framework.go:187
... skipping 277 lines ...
• [SLOW TEST:24.689 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:25.501: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:25.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "protocol-6513" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":11,"skipped":65,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:26.014: INFO: Only supported for providers [azure] (not gce)
... skipping 143 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
test/e2e/common/storage/host_path.go:39
[It] should support subPath [NodeConformance]
test/e2e/common/storage/host_path.go:95
STEP: Creating a pod to test hostPath subPath
Jun 22 22:18:22.946: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6325" to be "Succeeded or Failed"
Jun 22 22:18:22.981: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 34.689523ms
Jun 22 22:18:25.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070454996s
Jun 22 22:18:27.020: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073638345s
STEP: Saw pod success
Jun 22 22:18:27.020: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 22 22:18:27.054: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 22 22:18:27.132: INFO: Waiting for pod pod-host-path-test to disappear
Jun 22 22:18:27.166: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
test/e2e/framework/framework.go:187
Jun 22 22:18:27.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6325" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":9,"skipped":59,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:27.267: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:27.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3776" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":10,"skipped":65,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:27.875: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 186 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":43,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:28.041: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 53 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:29.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4395" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":13,"skipped":72,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
Jun 22 22:18:10.225: INFO: PersistentVolumeClaim pvc-6mmwd found but phase is Pending instead of Bound.
Jun 22 22:18:12.260: INFO: PersistentVolumeClaim pvc-6mmwd found and phase=Bound (4.10683396s)
Jun 22 22:18:12.260: INFO: Waiting up to 3m0s for PersistentVolume local-cgt6t to have phase Bound
Jun 22 22:18:12.295: INFO: PersistentVolume local-cgt6t found and phase=Bound (34.431729ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7tc2
STEP: Creating a pod to test subpath
Jun 22 22:18:12.407: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7tc2" in namespace "provisioning-585" to be "Succeeded or Failed"
Jun 22 22:18:12.442: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.177278ms
Jun 22 22:18:14.484: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077337546s
Jun 22 22:18:16.477: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070197741s
Jun 22 22:18:18.485: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078542599s
Jun 22 22:18:20.480: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07313581s
Jun 22 22:18:22.480: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073157885s
Jun 22 22:18:24.478: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.070937565s
Jun 22 22:18:26.478: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.071177256s
Jun 22 22:18:28.478: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.071443033s
STEP: Saw pod success
Jun 22 22:18:28.478: INFO: Pod "pod-subpath-test-preprovisionedpv-7tc2" satisfied condition "Succeeded or Failed"
Jun 22 22:18:28.521: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-7tc2 container test-container-volume-preprovisionedpv-7tc2: <nil>
STEP: delete the pod
Jun 22 22:18:28.623: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7tc2 to disappear
Jun 22 22:18:28.657: INFO: Pod pod-subpath-test-preprovisionedpv-7tc2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7tc2
Jun 22 22:18:28.658: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7tc2" in namespace "provisioning-585"
... skipping 308 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":99,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:16.880 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":6,"skipped":76,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:30.177: INFO: Only supported for providers [vsphere] (not gce)
... skipping 186 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume at the same time
test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":87,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:30.520: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 59 lines ...
• [SLOW TEST:16.747 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":13,"skipped":109,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:30.944: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 19 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:30.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name secret-emptykey-test-ba1e0ae9-896f-4ee7-99bf-ac1c40bb6c07
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
Jun 22 22:18:30.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6109" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":11,"skipped":96,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:30.959: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 62 lines ...
Jun 22 22:17:10.982: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8973
Jun 22 22:17:11.025: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8973
Jun 22 22:17:11.062: INFO: creating *v1.StatefulSet: csi-mock-volumes-8973-5292/csi-mockplugin
Jun 22 22:17:11.101: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8973
Jun 22 22:17:11.153: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8973"
Jun 22 22:17:11.187: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8973 to register on node nodes-us-east1-b-vgn6
I0622 22:17:18.645989 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8973","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 22:17:18.846852 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0622 22:17:18.881501 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8973","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 22:17:18.920487 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0622 22:17:18.955944 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0622 22:17:19.667306 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8973"},"Error":"","FullError":null}
STEP: Creating pod
Jun 22 22:17:20.906: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 22 22:17:20.944: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mhm49] to have phase Bound
I0622 22:17:20.953886 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-66db86ce-9960-4947-a02c-a2fc4157567e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jun 22 22:17:20.978: INFO: PersistentVolumeClaim pvc-mhm49 found but phase is Pending instead of Bound.
I0622 22:17:21.989750 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-66db86ce-9960-4947-a02c-a2fc4157567e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-66db86ce-9960-4947-a02c-a2fc4157567e"}}},"Error":"","FullError":null}
Jun 22 22:17:23.015: INFO: PersistentVolumeClaim pvc-mhm49 found and phase=Bound (2.071140348s)
Jun 22 22:17:23.127: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-wx5w2" in namespace "csi-mock-volumes-8973" to be "running"
Jun 22 22:17:23.164: INFO: Pod "pvc-volume-tester-wx5w2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.406221ms
I0622 22:17:23.371344 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:17:23.407022 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:17:23.446707 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 22:17:23.482: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:17:23.483: INFO: ExecWithOptions: Clientset creation
Jun 22 22:17:23.483: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-8973-5292/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-8973%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-8973%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:17:23.769227 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8973/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-66db86ce-9960-4947-a02c-a2fc4157567e","storage.kubernetes.io/csiProvisionerIdentity":"1655936238972-8081-csi-mock-csi-mock-volumes-8973"}},"Response":{},"Error":"","FullError":null}
I0622 22:17:23.803618 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:17:23.838579 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:17:23.873358 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 22:17:23.919: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:17:23.919: INFO: ExecWithOptions: Clientset creation
Jun 22 22:17:23.920: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-8973-5292/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ffe68e818-9724-4b4a-b8b8-1470c93c63fe%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-66db86ce-9960-4947-a02c-a2fc4157567e%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ffe68e818-9724-4b4a-b8b8-1470c93c63fe%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-66db86ce-9960-4947-a02c-a2fc4157567e%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 22:17:24.179: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:17:24.180: INFO: ExecWithOptions: Clientset creation
Jun 22 22:17:24.180: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-8973-5292/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ffe68e818-9724-4b4a-b8b8-1470c93c63fe%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-66db86ce-9960-4947-a02c-a2fc4157567e%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Ffe68e818-9724-4b4a-b8b8-1470c93c63fe%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-66db86ce-9960-4947-a02c-a2fc4157567e%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 22:17:24.439: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:17:24.440: INFO: ExecWithOptions: Clientset creation
Jun 22 22:17:24.440: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-8973-5292/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Ffe68e818-9724-4b4a-b8b8-1470c93c63fe%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-66db86ce-9960-4947-a02c-a2fc4157567e%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:17:24.711675 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8973/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/fe68e818-9724-4b4a-b8b8-1470c93c63fe/volumes/kubernetes.io~csi/pvc-66db86ce-9960-4947-a02c-a2fc4157567e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-66db86ce-9960-4947-a02c-a2fc4157567e","storage.kubernetes.io/csiProvisionerIdentity":"1655936238972-8081-csi-mock-csi-mock-volumes-8973"}},"Response":{},"Error":"","FullError":null}
Jun 22 22:17:25.199: INFO: Pod "pvc-volume-tester-wx5w2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071884313s
Jun 22 22:17:27.200: INFO: Pod "pvc-volume-tester-wx5w2": Phase="Running", Reason="", readiness=true. Elapsed: 4.072940318s
Jun 22 22:17:27.201: INFO: Pod "pvc-volume-tester-wx5w2" satisfied condition "running"
Jun 22 22:17:27.201: INFO: Deleting pod "pvc-volume-tester-wx5w2" in namespace "csi-mock-volumes-8973"
Jun 22 22:17:27.238: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wx5w2" to be fully deleted
Jun 22 22:17:28.318: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 22:17:28.319: INFO: ExecWithOptions: Clientset creation
Jun 22 22:17:28.319: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/csi-mock-volumes-8973-5292/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Ffe68e818-9724-4b4a-b8b8-1470c93c63fe%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-66db86ce-9960-4947-a02c-a2fc4157567e%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 22:17:28.599545 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/fe68e818-9724-4b4a-b8b8-1470c93c63fe/volumes/kubernetes.io~csi/pvc-66db86ce-9960-4947-a02c-a2fc4157567e/mount"},"Response":{},"Error":"","FullError":null}
I0622 22:17:28.716266 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 22:17:28.756053 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8973/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
I0622 22:17:31.374325 7214 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 22 22:17:32.348: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mhm49", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8973", SelfLink:"", UID:"66db86ce-9960-4947-a02c-a2fc4157567e", ResourceVersion:"9114", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002296858), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003d427c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003d427d0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:17:32.348: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mhm49", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8973", SelfLink:"", UID:"66db86ce-9960-4947-a02c-a2fc4157567e", ResourceVersion:"9115", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00242a5b8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00242a5e8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003d91350), VolumeMode:(*v1.PersistentVolumeMode)(0xc003d91360), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:17:32.348: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mhm49", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8973", SelfLink:"", UID:"66db86ce-9960-4947-a02c-a2fc4157567e", ResourceVersion:"9146", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af1530), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 22, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af1560), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-66db86ce-9960-4947-a02c-a2fc4157567e", StorageClassName:(*string)(0xc00291f1c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00291f1d0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:17:32.348: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mhm49", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8973", SelfLink:"", UID:"66db86ce-9960-4947-a02c-a2fc4157567e", ResourceVersion:"9147", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af15a8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 22, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af1668), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 22, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af17e8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, 
VolumeName:"pvc-66db86ce-9960-4947-a02c-a2fc4157567e", StorageClassName:(*string)(0xc00291f200), VolumeMode:(*v1.PersistentVolumeMode)(0xc00291f210), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 22:17:32.348: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mhm49", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8973", SelfLink:"", UID:"66db86ce-9960-4947-a02c-a2fc4157567e", ResourceVersion:"9472", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), DeletionTimestamp:time.Date(2022, time.June, 22, 22, 17, 31, 0, time.Local), DeletionGracePeriodSeconds:(*int64)(0xc0033ac398), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8973"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af1848), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 22, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af1878), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 22, 17, 22, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001af18a8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-66db86ce-9960-4947-a02c-a2fc4157567e", StorageClassName:(*string)(0xc00291f250), VolumeMode:(*v1.PersistentVolumeMode)(0xc00291f260), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
... skipping 48 lines ...
test/e2e/storage/utils/framework.go:23
storage capacity
test/e2e/storage/csi_mock_volume.go:1100
exhausted, immediate binding
test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":14,"skipped":113,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 30 lines ...
Jun 22 22:18:31.703: INFO: AfterEach: Cleaning up test resources.
Jun 22 22:18:31.703: INFO: Deleting PersistentVolumeClaim "pvc-qljfn"
Jun 22 22:18:31.738: INFO: Deleting PersistentVolume "hostpath-cpzqk"
•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":14,"skipped":115,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 18 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:32.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9887" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":15,"skipped":116,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:7.194 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should update labels on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:33.240: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 116 lines ...
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":129,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:33.315: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
• [SLOW TEST:26.983 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
test/e2e/network/service.go:1207
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":5,"skipped":39,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:34.597: INFO: Only supported for providers [openstack] (not gce)
... skipping 70 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:34.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7799" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":13,"skipped":90,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 65 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:34.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7531" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":72,"failed":0}
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:29.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101
Jun 22 22:18:29.501: INFO: Waiting up to 5m0s for pod "busybox-user-0-3a81401e-f223-4f57-af68-8681f65e0f25" in namespace "security-context-test-9675" to be "Succeeded or Failed"
Jun 22 22:18:29.535: INFO: Pod "busybox-user-0-3a81401e-f223-4f57-af68-8681f65e0f25": Phase="Pending", Reason="", readiness=false. Elapsed: 34.434165ms
Jun 22 22:18:31.571: INFO: Pod "busybox-user-0-3a81401e-f223-4f57-af68-8681f65e0f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070079424s
Jun 22 22:18:33.572: INFO: Pod "busybox-user-0-3a81401e-f223-4f57-af68-8681f65e0f25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070870515s
Jun 22 22:18:35.575: INFO: Pod "busybox-user-0-3a81401e-f223-4f57-af68-8681f65e0f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074051227s
Jun 22 22:18:35.575: INFO: Pod "busybox-user-0-3a81401e-f223-4f57-af68-8681f65e0f25" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 22:18:35.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9675" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a container with runAsUser
test/e2e/common/node/security_context.go:52
should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":72,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 389 lines ...
• [SLOW TEST:82.881 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
should drop INVALID conntrack entries [Privileged]
test/e2e/network/conntrack.go:363
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]","total":-1,"completed":6,"skipped":60,"failed":0}
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:186
[1mSTEP[0m: Creating a kubernetes client
Jun 22 22:18:37.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:37.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4061" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:37.906: INFO: Only supported for providers [azure] (not gce)
... skipping 99 lines ...
Jun 22 22:18:25.643: INFO: PersistentVolumeClaim pvc-n92xn found but phase is Pending instead of Bound.
Jun 22 22:18:27.685: INFO: PersistentVolumeClaim pvc-n92xn found and phase=Bound (16.335507211s)
Jun 22 22:18:27.685: INFO: Waiting up to 3m0s for PersistentVolume local-jmjqx to have phase Bound
Jun 22 22:18:27.719: INFO: PersistentVolume local-jmjqx found and phase=Bound (33.995137ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dg85
STEP: Creating a pod to test subpath
Jun 22 22:18:27.830: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dg85" in namespace "provisioning-874" to be "Succeeded or Failed"
Jun 22 22:18:27.865: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85": Phase="Pending", Reason="", readiness=false. Elapsed: 34.698604ms
Jun 22 22:18:29.900: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069831444s
Jun 22 22:18:31.899: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069245444s
Jun 22 22:18:33.902: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072081138s
Jun 22 22:18:35.901: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070834087s
Jun 22 22:18:37.902: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071868865s
STEP: Saw pod success
Jun 22 22:18:37.902: INFO: Pod "pod-subpath-test-preprovisionedpv-dg85" satisfied condition "Succeeded or Failed"
Jun 22 22:18:37.940: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-subpath-test-preprovisionedpv-dg85 container test-container-subpath-preprovisionedpv-dg85: <nil>
STEP: delete the pod
Jun 22 22:18:38.044: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dg85 to disappear
Jun 22 22:18:38.080: INFO: Pod pod-subpath-test-preprovisionedpv-dg85 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dg85
Jun 22 22:18:38.080: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dg85" in namespace "provisioning-874"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":44,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 18 lines ...
Jun 22 22:18:30.914: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb"
Jun 22 22:18:30.915: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb" in namespace "pods-1615" to be "terminated with reason DeadlineExceeded"
Jun 22 22:18:30.952: INFO: Pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb": Phase="Running", Reason="", readiness=true. Elapsed: 37.750228ms
Jun 22 22:18:32.988: INFO: Pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb": Phase="Running", Reason="", readiness=true. Elapsed: 2.073593235s
Jun 22 22:18:34.990: INFO: Pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb": Phase="Running", Reason="", readiness=true. Elapsed: 4.075439078s
Jun 22 22:18:36.994: INFO: Pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb": Phase="Running", Reason="", readiness=true. Elapsed: 6.07945979s
Jun 22 22:18:38.988: INFO: Pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 8.073904484s
Jun 22 22:18:38.989: INFO: Pod "pod-update-activedeadlineseconds-ad36cd64-8827-4d5a-82b6-873a2547bfeb" satisfied condition "terminated with reason DeadlineExceeded"
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:187
Jun 22 22:18:38.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1615" for this suite.
• [SLOW TEST:11.155 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":75,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 33 lines ...
Jun 22 22:18:09.969: INFO: PersistentVolumeClaim pvc-cj6h7 found but phase is Pending instead of Bound.
Jun 22 22:18:12.005: INFO: PersistentVolumeClaim pvc-cj6h7 found and phase=Bound (6.149723156s)
Jun 22 22:18:12.005: INFO: Waiting up to 3m0s for PersistentVolume local-2ps6n to have phase Bound
Jun 22 22:18:12.042: INFO: PersistentVolume local-2ps6n found and phase=Bound (36.130222ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-smdq
STEP: Creating a pod to test subpath
Jun 22 22:18:12.149: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-smdq" in namespace "provisioning-8918" to be "Succeeded or Failed"
Jun 22 22:18:12.185: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 35.651747ms
Jun 22 22:18:14.230: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080824368s
Jun 22 22:18:16.222: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07217919s
Jun 22 22:18:18.223: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073356545s
Jun 22 22:18:20.221: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07191537s
Jun 22 22:18:22.224: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074358051s
Jun 22 22:18:24.221: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071520111s
Jun 22 22:18:26.225: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.075214172s
Jun 22 22:18:28.225: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.075061924s
STEP: Saw pod success
Jun 22 22:18:28.225: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq" satisfied condition "Succeeded or Failed"
Jun 22 22:18:28.260: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-smdq container test-container-subpath-preprovisionedpv-smdq: <nil>
STEP: delete the pod
Jun 22 22:18:28.352: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-smdq to disappear
Jun 22 22:18:28.413: INFO: Pod pod-subpath-test-preprovisionedpv-smdq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-smdq
Jun 22 22:18:28.413: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-smdq" in namespace "provisioning-8918"
STEP: Creating pod pod-subpath-test-preprovisionedpv-smdq
STEP: Creating a pod to test subpath
Jun 22 22:18:28.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-smdq" in namespace "provisioning-8918" to be "Succeeded or Failed"
Jun 22 22:18:28.532: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.827169ms
Jun 22 22:18:30.576: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080466847s
Jun 22 22:18:32.569: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073832218s
Jun 22 22:18:34.568: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073358421s
Jun 22 22:18:36.571: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075911137s
Jun 22 22:18:38.571: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075392258s
STEP: Saw pod success
Jun 22 22:18:38.571: INFO: Pod "pod-subpath-test-preprovisionedpv-smdq" satisfied condition "Succeeded or Failed"
Jun 22 22:18:38.608: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-smdq container test-container-subpath-preprovisionedpv-smdq: <nil>
STEP: delete the pod
Jun 22 22:18:38.690: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-smdq to disappear
Jun 22 22:18:38.724: INFO: Pod pod-subpath-test-preprovisionedpv-smdq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-smdq
Jun 22 22:18:38.724: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-smdq" in namespace "provisioning-8918"
... skipping 42 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 22:18:33.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be" in namespace "downward-api-6115" to be "Succeeded or Failed"
Jun 22 22:18:33.645: INFO: Pod "downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be": Phase="Pending", Reason="", readiness=false. Elapsed: 35.45803ms
Jun 22 22:18:35.681: INFO: Pod "downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07136122s
Jun 22 22:18:37.685: INFO: Pod "downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075743724s
Jun 22 22:18:39.697: INFO: Pod "downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087533531s
STEP: Saw pod success
Jun 22 22:18:39.697: INFO: Pod "downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be" satisfied condition "Succeeded or Failed"
Jun 22 22:18:39.740: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be container client-container: <nil>
STEP: delete the pod
Jun 22 22:18:39.871: INFO: Waiting for pod downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be to disappear
Jun 22 22:18:39.916: INFO: Pod downwardapi-volume-3c7a0e85-3040-4ada-a988-e3bc8d7b57be no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.687 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide container's memory limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":130,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:40.015: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 23 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 22 22:18:11.941: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 22:18:12.024: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8535" in namespace "provisioning-8535" to be "Succeeded or Failed"
Jun 22 22:18:12.059: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 34.370137ms
Jun 22 22:18:14.096: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071464549s
Jun 22 22:18:16.097: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072550011s
Jun 22 22:18:18.093: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068810781s
Jun 22 22:18:20.095: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07072901s
Jun 22 22:18:22.094: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06939554s
Jun 22 22:18:24.094: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069171646s
STEP: Saw pod success
Jun 22 22:18:24.094: INFO: Pod "hostpath-symlink-prep-provisioning-8535" satisfied condition "Succeeded or Failed"
Jun 22 22:18:24.094: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8535" in namespace "provisioning-8535"
Jun 22 22:18:24.136: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8535" to be fully deleted
Jun 22 22:18:24.168: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kf8t
STEP: Creating a pod to test subpath
Jun 22 22:18:24.204: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kf8t" in namespace "provisioning-8535" to be "Succeeded or Failed"
Jun 22 22:18:24.237: INFO: Pod "pod-subpath-test-inlinevolume-kf8t": Phase="Pending", Reason="", readiness=false. Elapsed: 33.752426ms
Jun 22 22:18:26.273: INFO: Pod "pod-subpath-test-inlinevolume-kf8t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069288908s
Jun 22 22:18:28.272: INFO: Pod "pod-subpath-test-inlinevolume-kf8t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067883324s
Jun 22 22:18:30.275: INFO: Pod "pod-subpath-test-inlinevolume-kf8t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070856018s
Jun 22 22:18:32.274: INFO: Pod "pod-subpath-test-inlinevolume-kf8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070512362s
STEP: Saw pod success
Jun 22 22:18:32.274: INFO: Pod "pod-subpath-test-inlinevolume-kf8t" satisfied condition "Succeeded or Failed"
Jun 22 22:18:32.311: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-inlinevolume-kf8t container test-container-volume-inlinevolume-kf8t: <nil>
STEP: delete the pod
Jun 22 22:18:32.404: INFO: Waiting for pod pod-subpath-test-inlinevolume-kf8t to disappear
Jun 22 22:18:32.438: INFO: Pod pod-subpath-test-inlinevolume-kf8t no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kf8t
Jun 22 22:18:32.438: INFO: Deleting pod "pod-subpath-test-inlinevolume-kf8t" in namespace "provisioning-8535"
STEP: Deleting pod
Jun 22 22:18:32.471: INFO: Deleting pod "pod-subpath-test-inlinevolume-kf8t" in namespace "provisioning-8535"
Jun 22 22:18:32.541: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8535" in namespace "provisioning-8535" to be "Succeeded or Failed"
Jun 22 22:18:32.578: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 36.864324ms
Jun 22 22:18:34.614: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072489893s
Jun 22 22:18:36.613: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071675749s
Jun 22 22:18:38.613: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071949897s
Jun 22 22:18:40.614: INFO: Pod "hostpath-symlink-prep-provisioning-8535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072484375s
STEP: Saw pod success
Jun 22 22:18:40.614: INFO: Pod "hostpath-symlink-prep-provisioning-8535" satisfied condition "Succeeded or Failed"
Jun 22 22:18:40.614: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8535" in namespace "provisioning-8535"
Jun 22 22:18:40.657: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8535" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 22 22:18:40.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8535" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":13,"skipped":123,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:40.790: INFO: >>> kubeConfig: /root/.kube/config
... skipping 32 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-projected-all-test-volume-65124464-acef-4da3-b0e1-47db39c02336
STEP: Creating secret with name secret-projected-all-test-volume-9b1fdb6f-17c2-447e-bb13-db330dd974d4
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 22 22:18:35.201: INFO: Waiting up to 5m0s for pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e" in namespace "projected-5713" to be "Succeeded or Failed"
Jun 22 22:18:35.247: INFO: Pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.588561ms
Jun 22 22:18:37.289: INFO: Pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087415503s
Jun 22 22:18:39.283: INFO: Pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082030272s
Jun 22 22:18:41.284: INFO: Pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083120218s
Jun 22 22:18:43.282: INFO: Pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081186285s
STEP: Saw pod success
Jun 22 22:18:43.283: INFO: Pod "projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e" satisfied condition "Succeeded or Failed"
Jun 22 22:18:43.318: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 22 22:18:43.412: INFO: Waiting for pod projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e to disappear
Jun 22 22:18:43.447: INFO: Pod projected-volume-e16a3bc1-e8a8-4433-9f07-af0fc920cb2e no longer exists
[AfterEach] [sig-storage] Projected combined
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.769 seconds]
[sig-storage] Projected combined
test/e2e/common/storage/framework.go:23
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":96,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:43.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 22 22:18:35.994: INFO: Waiting up to 5m0s for pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe" in namespace "emptydir-2750" to be "Succeeded or Failed"
Jun 22 22:18:36.029: INFO: Pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe": Phase="Pending", Reason="", readiness=false. Elapsed: 35.263133ms
Jun 22 22:18:38.065: INFO: Pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071437102s
Jun 22 22:18:40.075: INFO: Pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08087637s
Jun 22 22:18:42.070: INFO: Pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07601413s
Jun 22 22:18:44.067: INFO: Pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073519931s
STEP: Saw pod success
Jun 22 22:18:44.067: INFO: Pod "pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe" satisfied condition "Succeeded or Failed"
Jun 22 22:18:44.108: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe container test-container: <nil>
STEP: delete the pod
Jun 22 22:18:44.203: INFO: Waiting for pod pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe to disappear
Jun 22 22:18:44.241: INFO: Pod pod-854b5392-bebd-4fb8-9df3-dd32fadc1abe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
new files should be created with FSGroup ownership when container is root
test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":10,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:44.334: INFO: >>> kubeConfig: /root/.kube/config
... skipping 100 lines ...
[It] should support existing directory
test/e2e/storage/testsuites/subpath.go:207
Jun 22 22:18:34.880: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 22 22:18:34.880: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xj5c
STEP: Creating a pod to test subpath
Jun 22 22:18:34.919: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xj5c" in namespace "provisioning-7890" to be "Succeeded or Failed"
Jun 22 22:18:34.955: INFO: Pod "pod-subpath-test-inlinevolume-xj5c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.192916ms
Jun 22 22:18:36.991: INFO: Pod "pod-subpath-test-inlinevolume-xj5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071752098s
Jun 22 22:18:38.992: INFO: Pod "pod-subpath-test-inlinevolume-xj5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073002208s
Jun 22 22:18:40.991: INFO: Pod "pod-subpath-test-inlinevolume-xj5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07189885s
Jun 22 22:18:42.994: INFO: Pod "pod-subpath-test-inlinevolume-xj5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074861153s
Jun 22 22:18:44.993: INFO: Pod "pod-subpath-test-inlinevolume-xj5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074403204s
STEP: Saw pod success
Jun 22 22:18:44.993: INFO: Pod "pod-subpath-test-inlinevolume-xj5c" satisfied condition "Succeeded or Failed"
Jun 22 22:18:45.035: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-inlinevolume-xj5c container test-container-volume-inlinevolume-xj5c: <nil>
STEP: delete the pod
Jun 22 22:18:45.131: INFO: Waiting for pod pod-subpath-test-inlinevolume-xj5c to disappear
Jun 22 22:18:45.174: INFO: Pod pod-subpath-test-inlinevolume-xj5c no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xj5c
Jun 22 22:18:45.174: INFO: Deleting pod "pod-subpath-test-inlinevolume-xj5c" in namespace "provisioning-7890"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":44,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 118 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-9ca0316c-39aa-4487-a669-5b8cea0d2101
STEP: Creating a pod to test consume secrets
Jun 22 22:18:31.443: INFO: Waiting up to 5m0s for pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d" in namespace "secrets-7926" to be "Succeeded or Failed"
Jun 22 22:18:31.492: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.84512ms
Jun 22 22:18:33.533: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090460457s
Jun 22 22:18:35.533: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090586523s
Jun 22 22:18:37.540: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097527545s
Jun 22 22:18:39.532: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089727561s
Jun 22 22:18:41.535: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091980223s
Jun 22 22:18:43.528: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Running", Reason="", readiness=true. Elapsed: 12.085240011s
Jun 22 22:18:45.532: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.089365768s
STEP: Saw pod success
Jun 22 22:18:45.532: INFO: Pod "pod-secrets-2a927206-a234-4058-a876-ae723398c70d" satisfied condition "Succeeded or Failed"
Jun 22 22:18:45.567: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-secrets-2a927206-a234-4058-a876-ae723398c70d container secret-volume-test: <nil>
STEP: delete the pod
Jun 22 22:18:45.654: INFO: Waiting for pod pod-secrets-2a927206-a234-4058-a876-ae723398c70d to disappear
Jun 22 22:18:45.689: INFO: Pod pod-secrets-2a927206-a234-4058-a876-ae723398c70d no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.663 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":117,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:45.778: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/framework/framework.go:187
... skipping 96 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1858" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":16,"skipped":140,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:46.332: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
Jun 22 22:18:25.552: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 22:18:25.647: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-30" in namespace "provisioning-30" to be "Succeeded or Failed"
Jun 22 22:18:25.682: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 34.619852ms
Jun 22 22:18:27.720: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072902323s
Jun 22 22:18:29.717: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070228816s
Jun 22 22:18:31.717: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06969559s
STEP: Saw pod success
Jun 22 22:18:31.717: INFO: Pod "hostpath-symlink-prep-provisioning-30" satisfied condition "Succeeded or Failed"
Jun 22 22:18:31.717: INFO: Deleting pod "hostpath-symlink-prep-provisioning-30" in namespace "provisioning-30"
Jun 22 22:18:31.757: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-30" to be fully deleted
Jun 22 22:18:31.791: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cw9j
STEP: Creating a pod to test subpath
Jun 22 22:18:31.831: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cw9j" in namespace "provisioning-30" to be "Succeeded or Failed"
Jun 22 22:18:31.865: INFO: Pod "pod-subpath-test-inlinevolume-cw9j": Phase="Pending", Reason="", readiness=false. Elapsed: 33.868912ms
Jun 22 22:18:33.904: INFO: Pod "pod-subpath-test-inlinevolume-cw9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072326733s
Jun 22 22:18:35.902: INFO: Pod "pod-subpath-test-inlinevolume-cw9j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070275359s
Jun 22 22:18:37.904: INFO: Pod "pod-subpath-test-inlinevolume-cw9j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073050968s
STEP: Saw pod success
Jun 22 22:18:37.904: INFO: Pod "pod-subpath-test-inlinevolume-cw9j" satisfied condition "Succeeded or Failed"
Jun 22 22:18:37.941: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-inlinevolume-cw9j container test-container-subpath-inlinevolume-cw9j: <nil>
STEP: delete the pod
Jun 22 22:18:38.042: INFO: Waiting for pod pod-subpath-test-inlinevolume-cw9j to disappear
Jun 22 22:18:38.080: INFO: Pod pod-subpath-test-inlinevolume-cw9j no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cw9j
Jun 22 22:18:38.080: INFO: Deleting pod "pod-subpath-test-inlinevolume-cw9j" in namespace "provisioning-30"
STEP: Deleting pod
Jun 22 22:18:38.117: INFO: Deleting pod "pod-subpath-test-inlinevolume-cw9j" in namespace "provisioning-30"
Jun 22 22:18:38.193: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-30" in namespace "provisioning-30" to be "Succeeded or Failed"
Jun 22 22:18:38.229: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 36.442151ms
Jun 22 22:18:40.286: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093275438s
Jun 22 22:18:42.266: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073604333s
Jun 22 22:18:44.265: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072174909s
Jun 22 22:18:46.265: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071961272s
Jun 22 22:18:48.266: INFO: Pod "hostpath-symlink-prep-provisioning-30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072682856s
STEP: Saw pod success
Jun 22 22:18:48.266: INFO: Pod "hostpath-symlink-prep-provisioning-30" satisfied condition "Succeeded or Failed"
Jun 22 22:18:48.266: INFO: Deleting pod "hostpath-symlink-prep-provisioning-30" in namespace "provisioning-30"
Jun 22 22:18:48.307: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-30" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 22 22:18:48.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-30" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":42,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:48.454: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 50 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:77
STEP: Creating configMap with name configmap-test-volume-bdefc0b4-0fe8-4cf2-9e02-f1109d6052ee
STEP: Creating a pod to test consume configMaps
Jun 22 22:18:40.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c" in namespace "configmap-6132" to be "Succeeded or Failed"
Jun 22 22:18:40.466: INFO: Pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.957552ms
Jun 22 22:18:42.503: INFO: Pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080486436s
Jun 22 22:18:44.503: INFO: Pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081081128s
Jun 22 22:18:46.501: INFO: Pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078911603s
Jun 22 22:18:48.502: INFO: Pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079968216s
STEP: Saw pod success
Jun 22 22:18:48.502: INFO: Pod "pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c" satisfied condition "Succeeded or Failed"
Jun 22 22:18:48.545: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:18:48.625: INFO: Waiting for pod pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c to disappear
Jun 22 22:18:48.661: INFO: Pod pod-configmaps-0e0c0ed5-6ab3-4a2f-8028-b66bfd13013c no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.674 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:77
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":139,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:48.760: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:8.697 seconds]
[sig-storage] EmptyDir wrapper volumes
test/e2e/storage/utils/framework.go:23
should not conflict [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":14,"skipped":124,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:49.892: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
Jun 22 22:18:10.424: INFO: PersistentVolumeClaim pvc-fq2g2 found but phase is Pending instead of Bound.
Jun 22 22:18:12.461: INFO: PersistentVolumeClaim pvc-fq2g2 found and phase=Bound (8.17993499s)
Jun 22 22:18:12.461: INFO: Waiting up to 3m0s for PersistentVolume local-zc2zj to have phase Bound
Jun 22 22:18:12.495: INFO: PersistentVolume local-zc2zj found and phase=Bound (34.750771ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gpdp
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 22:18:12.607: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gpdp" in namespace "provisioning-7036" to be "Succeeded or Failed"
Jun 22 22:18:12.642: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Pending", Reason="", readiness=false. Elapsed: 34.501374ms
Jun 22 22:18:14.678: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070789851s
Jun 22 22:18:16.678: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070442403s
Jun 22 22:18:18.677: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069738281s
Jun 22 22:18:20.677: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069717363s
Jun 22 22:18:22.686: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078286819s
... skipping 8 lines ...
Jun 22 22:18:40.682: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Running", Reason="", readiness=true. Elapsed: 28.074922604s
Jun 22 22:18:42.681: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Running", Reason="", readiness=true. Elapsed: 30.073531718s
Jun 22 22:18:44.682: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Running", Reason="", readiness=true. Elapsed: 32.074907395s
Jun 22 22:18:46.678: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Running", Reason="", readiness=true. Elapsed: 34.070828466s
Jun 22 22:18:48.678: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.070940501s
STEP: Saw pod success
Jun 22 22:18:48.678: INFO: Pod "pod-subpath-test-preprovisionedpv-gpdp" satisfied condition "Succeeded or Failed"
Jun 22 22:18:48.716: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-subpath-test-preprovisionedpv-gpdp container test-container-subpath-preprovisionedpv-gpdp: <nil>
STEP: delete the pod
Jun 22 22:18:48.802: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gpdp to disappear
Jun 22 22:18:48.837: INFO: Pod pod-subpath-test-preprovisionedpv-gpdp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gpdp
Jun 22 22:18:48.837: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gpdp" in namespace "provisioning-7036"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":74,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:77
STEP: Creating configMap with name projected-configmap-test-volume-b952dabc-167f-4ddc-b5a0-d4d718f7df93
STEP: Creating a pod to test consume configMaps
Jun 22 22:18:40.409: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d" in namespace "projected-7501" to be "Succeeded or Failed"
Jun 22 22:18:40.453: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.844694ms
Jun 22 22:18:42.490: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080802784s
Jun 22 22:18:44.489: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079906049s
Jun 22 22:18:46.488: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079086513s
Jun 22 22:18:48.489: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080759958s
Jun 22 22:18:50.489: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080575916s
STEP: Saw pod success
Jun 22 22:18:50.489: INFO: Pod "pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d" satisfied condition "Succeeded or Failed"
Jun 22 22:18:50.527: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d container agnhost-container: <nil>
STEP: delete the pod
Jun 22 22:18:50.624: INFO: Waiting for pod pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d to disappear
Jun 22 22:18:50.668: INFO: Pod pod-projected-configmaps-e61d6e47-70f6-4825-9588-bedc2a687c9d no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.730 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:77
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":62,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
Jun 22 22:17:39.139: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-9355 create -f -'
Jun 22 22:17:39.461: INFO: stderr: ""
Jun 22 22:17:39.461: INFO: stdout: "pod/httpd created\n"
Jun 22 22:17:39.461: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 22 22:17:39.461: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-9355" to be "running and ready"
Jun 22 22:17:39.497: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 35.453044ms
Jun 22 22:17:39.497: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:17:41.532: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071139097s
Jun 22 22:17:41.532: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:17:43.533: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071311857s
Jun 22 22:17:43.533: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:17:45.534: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072600117s
Jun 22 22:17:45.534: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:17:47.532: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070460565s
Jun 22 22:17:47.532: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:17:49.535: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073571718s
Jun 22 22:17:49.535: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-east1-b-3xs4' to be 'Running' but was 'Pending'
Jun 22 22:17:51.532: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.071128952s
Jun 22 22:17:51.533: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-east1-b-3xs4' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC }]
Jun 22 22:17:53.538: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.076938441s
Jun 22 22:17:53.538: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-east1-b-3xs4' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC }]
Jun 22 22:17:55.533: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.071900238s
Jun 22 22:17:55.533: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-east1-b-3xs4' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 22:17:39 +0000 UTC }]
Jun 22 22:17:57.534: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 18.072800726s
Jun 22 22:17:57.534: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 22 22:17:57.534: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support inline execution and attach
test/e2e/kubectl/kubectl.go:591
STEP: executing a command with run and attach with stdin
... skipping 45 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:407
should support inline execution and attach
test/e2e/kubectl/kubectl.go:591
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":7,"skipped":53,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:51.285: INFO: Only supported for providers [azure] (not gce)
... skipping 31 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:51.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-7458" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should accurately determine present and missing resources","total":-1,"completed":8,"skipped":56,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 79 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":104,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:6.860 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Provider:GCE]
test/e2e/network/dns.go:70
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Provider:GCE]","total":-1,"completed":15,"skipped":141,"failed":0}
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:55.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
test/e2e/framework/framework.go:187
Jun 22 22:18:55.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3637" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":16,"skipped":141,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:56.080: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 208 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":53,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:18:57.238: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:10.282 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":12,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:35.069: INFO: >>> kubeConfig: /root/.kube/config
... skipping 27 lines ...
Jun 22 22:18:54.096: INFO: PersistentVolumeClaim pvc-gmf56 found but phase is Pending instead of Bound.
Jun 22 22:18:56.133: INFO: PersistentVolumeClaim pvc-gmf56 found and phase=Bound (10.223737993s)
Jun 22 22:18:56.133: INFO: Waiting up to 3m0s for PersistentVolume local-xfgcx to have phase Bound
Jun 22 22:18:56.176: INFO: PersistentVolume local-xfgcx found and phase=Bound (42.88192ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-d7zz
STEP: Creating a pod to test subpath
Jun 22 22:18:56.289: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-d7zz" in namespace "provisioning-7484" to be "Succeeded or Failed"
Jun 22 22:18:56.324: INFO: Pod "pod-subpath-test-preprovisionedpv-d7zz": Phase="Pending", Reason="", readiness=false. Elapsed: 35.240638ms
Jun 22 22:18:58.359: INFO: Pod "pod-subpath-test-preprovisionedpv-d7zz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070428603s
Jun 22 22:19:00.359: INFO: Pod "pod-subpath-test-preprovisionedpv-d7zz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070812407s
STEP: Saw pod success
Jun 22 22:19:00.360: INFO: Pod "pod-subpath-test-preprovisionedpv-d7zz" satisfied condition "Succeeded or Failed"
Jun 22 22:19:00.404: INFO: Trying to get logs from node nodes-us-east1-b-vf6p pod pod-subpath-test-preprovisionedpv-d7zz container test-container-subpath-preprovisionedpv-d7zz: <nil>
STEP: delete the pod
Jun 22 22:19:00.483: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-d7zz to disappear
Jun 22 22:19:00.518: INFO: Pod pod-subpath-test-preprovisionedpv-d7zz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-d7zz
Jun 22 22:19:00.519: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-d7zz" in namespace "provisioning-7484"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":13,"skipped":101,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 51 lines ...
Only supported for node OS distro [gci ubuntu] (not debian)
test/e2e/framework/skipper/skipper.go:301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":15,"skipped":129,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:01.508: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
test/e2e/common/node/runtime.go:43
on terminated container
test/e2e/common/node/runtime.go:136
should report termination message if TerminationMessagePath is set [NodeConformance]
test/e2e/common/node/runtime.go:173
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":17,"skipped":159,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:01.939: INFO: Only supported for providers [vsphere] (not gce)
... skipping 113 lines ...
STEP: Destroying namespace "services-4250" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":14,"skipped":123,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:58.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-7e9d9fb3-afa4-4502-abdf-ff0980c44237
STEP: Creating a pod to test consume secrets
Jun 22 22:18:59.153: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a" in namespace "projected-7610" to be "Succeeded or Failed"
Jun 22 22:18:59.190: INFO: Pod "pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.795926ms
Jun 22 22:19:01.230: INFO: Pod "pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077177638s
Jun 22 22:19:03.225: INFO: Pod "pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072382915s
STEP: Saw pod success
Jun 22 22:19:03.225: INFO: Pod "pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a" satisfied condition "Succeeded or Failed"
Jun 22 22:19:03.263: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 22 22:19:03.349: INFO: Waiting for pod pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a to disappear
Jun 22 22:19:03.386: INFO: Pod pod-projected-secrets-e50b771e-914a-49bd-84e5-c60be391308a no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
Jun 22 22:19:03.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7610" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:03.542: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 58 lines ...
Jun 22 22:18:45.479: INFO: ExecWithOptions: Clientset creation
Jun 22 22:18:45.480: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/sctp-192/pods/hostexec-nodes-us-east1-b-vf6p-kbdhm/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 22:18:45.765: INFO: exec nodes-us-east1-b-vf6p: command: lsmod | grep sctp
Jun 22 22:18:45.765: INFO: exec nodes-us-east1-b-vf6p: stdout: ""
Jun 22 22:18:45.765: INFO: exec nodes-us-east1-b-vf6p: stderr: ""
Jun 22 22:18:45.765: INFO: exec nodes-us-east1-b-vf6p: exit code: 0
Jun 22 22:18:45.765: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 22 22:18:45.765: INFO: the sctp module is not loaded on node: nodes-us-east1-b-vf6p
Jun 22 22:18:45.765: INFO: Executing cmd "lsmod | grep sctp" on node nodes-us-east1-b-3xs4
Jun 22 22:18:45.806: INFO: Waiting up to 5m0s for pod "hostexec-nodes-us-east1-b-3xs4-8wnjl" in namespace "sctp-192" to be "running"
Jun 22 22:18:45.841: INFO: Pod "hostexec-nodes-us-east1-b-3xs4-8wnjl": Phase="Pending", Reason="", readiness=false. Elapsed: 34.424851ms
Jun 22 22:18:47.877: INFO: Pod "hostexec-nodes-us-east1-b-3xs4-8wnjl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071072143s
Jun 22 22:18:49.882: INFO: Pod "hostexec-nodes-us-east1-b-3xs4-8wnjl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076229627s
... skipping 4 lines ...
Jun 22 22:18:51.879: INFO: ExecWithOptions: Clientset creation
Jun 22 22:18:51.879: INFO: ExecWithOptions: execute(POST https://34.138.125.141/api/v1/namespaces/sctp-192/pods/hostexec-nodes-us-east1-b-3xs4-8wnjl/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 22:18:52.153: INFO: exec nodes-us-east1-b-3xs4: command: lsmod | grep sctp
Jun 22 22:18:52.153: INFO: exec nodes-us-east1-b-3xs4: stdout: ""
Jun 22 22:18:52.153: INFO: exec nodes-us-east1-b-3xs4: stderr: ""
Jun 22 22:18:52.153: INFO: exec nodes-us-east1-b-3xs4: exit code: 0
Jun 22 22:18:52.153: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 22 22:18:52.153: INFO: the sctp module is not loaded on node: nodes-us-east1-b-3xs4
STEP: Deleting pod hostexec-nodes-us-east1-b-vf6p-kbdhm in namespace sctp-192
STEP: Deleting pod hostexec-nodes-us-east1-b-3xs4-8wnjl in namespace sctp-192
STEP: creating service sctp-clusterip in namespace sctp-192
Jun 22 22:18:52.318: INFO: Service sctp-clusterip in namespace sctp-192 found.
Jun 22 22:18:52.318: INFO: Executing cmd "iptables-save" on node nodes-us-east1-b-vf6p
... skipping 40 lines ...
• [SLOW TEST:33.521 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
should create a ClusterIP Service with SCTP ports
test/e2e/network/service.go:4178
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports","total":-1,"completed":7,"skipped":92,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:03.780: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 22 22:18:57.564: INFO: Waiting up to 5m0s for pod "pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929" in namespace "emptydir-8391" to be "Succeeded or Failed"
Jun 22 22:18:57.597: INFO: Pod "pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929": Phase="Pending", Reason="", readiness=false. Elapsed: 32.93834ms
Jun 22 22:18:59.632: INFO: Pod "pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068022735s
Jun 22 22:19:01.632: INFO: Pod "pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067556886s
Jun 22 22:19:03.636: INFO: Pod "pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071522986s
STEP: Saw pod success
Jun 22 22:19:03.636: INFO: Pod "pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929" satisfied condition "Succeeded or Failed"
Jun 22 22:19:03.672: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929 container test-container: <nil>
STEP: delete the pod
Jun 22 22:19:03.760: INFO: Waiting for pod pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929 to disappear
Jun 22 22:19:03.803: INFO: Pod pod-c5b81bff-02f4-4feb-8b43-7b9bbd393929 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.605 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:03.887: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 11 lines ...
Driver emptydir doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:18:45.384: INFO: >>> kubeConfig: /root/.kube/config
... skipping 22 lines ...
Jun 22 22:18:54.199: INFO: PersistentVolumeClaim pvc-nbmmg found but phase is Pending instead of Bound.
Jun 22 22:18:56.236: INFO: PersistentVolumeClaim pvc-nbmmg found and phase=Bound (4.109686786s)
Jun 22 22:18:56.236: INFO: Waiting up to 3m0s for PersistentVolume local-bc479 to have phase Bound
Jun 22 22:18:56.273: INFO: PersistentVolume local-bc479 found and phase=Bound (36.909429ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-c9zt
STEP: Creating a pod to test exec-volume-test
Jun 22 22:18:56.395: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-c9zt" in namespace "volume-6217" to be "Succeeded or Failed"
Jun 22 22:18:56.431: INFO: Pod "exec-volume-test-preprovisionedpv-c9zt": Phase="Pending", Reason="", readiness=false. Elapsed: 35.756405ms
Jun 22 22:18:58.466: INFO: Pod "exec-volume-test-preprovisionedpv-c9zt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070573497s
Jun 22 22:19:00.468: INFO: Pod "exec-volume-test-preprovisionedpv-c9zt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072613448s
Jun 22 22:19:02.466: INFO: Pod "exec-volume-test-preprovisionedpv-c9zt": Phase="Running", Reason="", readiness=true. Elapsed: 6.070882194s
Jun 22 22:19:04.467: INFO: Pod "exec-volume-test-preprovisionedpv-c9zt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072291855s
STEP: Saw pod success
Jun 22 22:19:04.468: INFO: Pod "exec-volume-test-preprovisionedpv-c9zt" satisfied condition "Succeeded or Failed"
Jun 22 22:19:04.508: INFO: Trying to get logs from node nodes-us-east1-b-t83b pod exec-volume-test-preprovisionedpv-c9zt container exec-container-preprovisionedpv-c9zt: <nil>
STEP: delete the pod
Jun 22 22:19:04.586: INFO: Waiting for pod exec-volume-test-preprovisionedpv-c9zt to disappear
Jun 22 22:19:04.621: INFO: Pod exec-volume-test-preprovisionedpv-c9zt no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-c9zt
Jun 22 22:19:04.621: INFO: Deleting pod "exec-volume-test-preprovisionedpv-c9zt" in namespace "volume-6217"
... skipping 117 lines ...
test/e2e/kubectl/framework.go:23
Update Demo
test/e2e/kubectl/kubectl.go:322
should create and stop a replication controller [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":13,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:07.332: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 43 lines ...
STEP: Destroying namespace "webhook-6349-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/webhook.go:104
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:08.312: INFO: Only supported for providers [aws] (not gce)
... skipping 149 lines ...
Jun 22 22:18:55.557: INFO: PersistentVolumeClaim pvc-bswrm found but phase is Pending instead of Bound.
Jun 22 22:18:57.593: INFO: PersistentVolumeClaim pvc-bswrm found and phase=Bound (2.091253806s)
Jun 22 22:18:57.593: INFO: Waiting up to 3m0s for PersistentVolume local-8gqsb to have phase Bound
Jun 22 22:18:57.628: INFO: PersistentVolume local-8gqsb found and phase=Bound (35.298688ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mwvs
STEP: Creating a pod to test subpath
Jun 22 22:18:57.737: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mwvs" in namespace "provisioning-4392" to be "Succeeded or Failed"
Jun 22 22:18:57.772: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Pending", Reason="", readiness=false. Elapsed: 34.896825ms
Jun 22 22:18:59.813: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075745195s
Jun 22 22:19:01.809: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072211049s
Jun 22 22:19:03.810: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072846s
Jun 22 22:19:05.830: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093329601s
STEP: Saw pod success
Jun 22 22:19:05.830: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs" satisfied condition "Succeeded or Failed"
Jun 22 22:19:05.930: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-subpath-test-preprovisionedpv-mwvs container test-container-subpath-preprovisionedpv-mwvs: <nil>
STEP: delete the pod
Jun 22 22:19:06.091: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mwvs to disappear
Jun 22 22:19:06.130: INFO: Pod pod-subpath-test-preprovisionedpv-mwvs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mwvs
Jun 22 22:19:06.130: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mwvs" in namespace "provisioning-4392"
STEP: Creating pod pod-subpath-test-preprovisionedpv-mwvs
STEP: Creating a pod to test subpath
Jun 22 22:19:06.207: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mwvs" in namespace "provisioning-4392" to be "Succeeded or Failed"
Jun 22 22:19:06.259: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Pending", Reason="", readiness=false. Elapsed: 51.272615ms
Jun 22 22:19:08.297: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089992151s
Jun 22 22:19:10.296: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088510871s
STEP: Saw pod success
Jun 22 22:19:10.296: INFO: Pod "pod-subpath-test-preprovisionedpv-mwvs" satisfied condition "Succeeded or Failed"
Jun 22 22:19:10.332: INFO: Trying to get logs from node nodes-us-east1-b-vgn6 pod pod-subpath-test-preprovisionedpv-mwvs container test-container-subpath-preprovisionedpv-mwvs: <nil>
STEP: delete the pod
Jun 22 22:19:10.424: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mwvs to disappear
Jun 22 22:19:10.459: INFO: Pod pod-subpath-test-preprovisionedpv-mwvs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mwvs
Jun 22 22:19:10.459: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mwvs" in namespace "provisioning-4392"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":17,"skipped":152,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:11.085: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver hostPathSymlink doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":64,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 22:19:05.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
Jun 22 22:19:11.058: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 22 22:19:11.058: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-2629 describe pod agnhost-primary-fvrjs'
Jun 22 22:19:11.311: INFO: stderr: ""
Jun 22 22:19:11.311: INFO: stdout: "Name: agnhost-primary-fvrjs\nNamespace: kubectl-2629\nPriority: 0\nNode: nodes-us-east1-b-t83b/10.0.16.3\nStart Time: Wed, 22 Jun 2022 22:19:06 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 100.96.4.163\nIPs:\n IP: 100.96.4.163\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://32660c51707a0ccd8b5ff8f1cb5a42e51099d4884ef661432d28e04d5fdcb800\n Image: registry.k8s.io/e2e-test-images/agnhost:2.39\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 22 Jun 2022 22:19:07 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8nlts (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-8nlts:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-2629/agnhost-primary-fvrjs to nodes-us-east1-b-t83b\n Normal Pulled 4s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 4s kubelet Created container agnhost-primary\n Normal Started 4s kubelet Started container agnhost-primary\n"
Jun 22 22:19:11.311: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-2629 describe rc agnhost-primary'
Jun 22 22:19:11.611: INFO: stderr: ""
Jun 22 22:19:11.611: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2629\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-fvrjs\n"
Jun 22 22:19:11.611: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-2629 describe service agnhost-primary'
Jun 22 22:19:11.902: INFO: stderr: ""
Jun 22 22:19:11.902: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2629\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.67.84.245\nIPs: 100.67.84.245\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.4.163:6379\nSession Affinity: None\nEvents: <none>\n"
Jun 22 22:19:11.950: INFO: Running '/logs/artifacts/2e075437-f277-11ec-8dfe-daa417708791/kubectl --server=https://34.138.125.141 --kubeconfig=/root/.kube/config --namespace=kubectl-2629 describe node master-us-east1-b-xt3x'
Jun 22 22:19:12.434: INFO: stderr: ""
Jun 22 22:19:12.434: INFO: stdout: "Name: master-us-east1-b-xt3x\nRoles: control-plane\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=e2-standard-2\n beta.kubernetes.io/os=linux\n cloud.google.com/metadata-proxy-ready=true\n failure-domain.beta.kubernetes.io/region=us-east1\n failure-domain.beta.kubernetes.io/zone=us-east1-b\n kops.k8s.io/instancegroup=master-us-east1-b\n kops.k8s.io/kops-controller-pki=\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master-us-east1-b-xt3x\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=e2-standard-2\n topology.gke.io/zone=us-east1-b\n topology.kubernetes.io/region=us-east1\n topology.kubernetes.io/zone=us-east1-b\nAnnotations: csi.volume.kubernetes.io/nodeid:\n {\"pd.csi.storage.gke.io\":\"projects/k8s-jkns-e2e-kubeadm-gce-ci/zones/us-east1-b/instances/master-us-east1-b-xt3x\"}\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 22 Jun 2022 22:09:40 +0000\nTaints: node-role.kubernetes.io/control-plane:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master-us-east1-b-xt3x\n AcquireTime: <unset>\n RenewTime: Wed, 22 Jun 2022 22:19:10 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Wed, 22 Jun 2022 22:10:47 +0000 Wed, 22 Jun 2022 22:10:47 +0000 RouteCreated RouteController created a route\n MemoryPressure False Wed, 22 Jun 2022 22:16:29 +0000 Wed, 22 Jun 2022 22:09:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 22 Jun 2022 22:16:29 +0000 Wed, 22 Jun 2022 22:09:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 22 Jun 2022 22:16:29 +0000 Wed, 22 Jun 2022 22:09:40 +0000 KubeletHasSufficientPID 
kubelet has sufficient PID available\n Ready True Wed, 22 Jun 2022 22:16:29 +0000 Wed, 22 Jun 2022 22:10:10 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.0.16.2\n ExternalIP: 34.75.233.73\n Hostname: master-us-east1-b-xt3x\nCapacity:\n cpu: 2\n ephemeral-storage: 48600704Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8145396Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 44790408733\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8042996Ki\n pods: 110\nSystem Info:\n Machine ID: 765dd14fb730b123b3631d5f90b71bbd\n System UUID: 765dd14f-b730-b123-b363-1d5f90b71bbd\n Boot ID: 2876f22c-2036-4c4a-932c-2ac5ab7b1213\n Kernel Version: 5.11.0-1028-gcp\n OS Image: Ubuntu 20.04.3 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.6\n Kubelet Version: v1.25.0-alpha.1\n Kube-Proxy Version: v1.25.0-alpha.1\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: gce://k8s-jkns-e2e-kubeadm-gce-ci/us-east1-b/master-us-east1-b-xt3x\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n gce-pd-csi-driver csi-gce-pd-controller-9f559494d-w59sn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m1s\n gce-pd-csi-driver csi-gce-pd-node-xdpk5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m1s\n kube-system cloud-controller-manager-qqhb7 200m (10%) 0 (0%) 0 (0%) 0 (0%) 9m1s\n kube-system dns-controller-78bc9bdd66-c5tpp 50m (2%) 0 (0%) 50Mi (0%) 0 (0%) 9m1s\n kube-system etcd-manager-events-master-us-east1-b-xt3x 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m28s\n kube-system etcd-manager-main-master-us-east1-b-xt3x 200m (10%) 0 (0%) 100Mi (1%) 0 (0%) 8m56s\n kube-system kops-controller-rdhz4 50m (2%) 0 (0%) 50Mi (0%) 0 (0%) 9m1s\n kube-system kube-apiserver-master-us-east1-b-xt3x 150m (7%) 0 (0%) 0 (0%) 0 (0%) 8m27s\n kube-system kube-controller-manager-master-us-east1-b-xt3x 100m (5%) 0 
(0%) 0 (0%) 0 (0%) 9m30s\n kube-system kube-proxy-master-us-east1-b-xt3x 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m45s\n kube-system kube-scheduler-master-us-east1-b-xt3x 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m50s\n kube-system metadata-proxy-v0.12-pcv9n 32m (1%) 32m (1%) 45Mi (0%) 45Mi (0%) 8m32s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1082m (54%) 32m (1%)\n memory 345Mi (4%) 45Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 9m15s kube-proxy \n Warning InvalidDiskCapacity 10m kubelet invalid capacity 0 on image filesystem\n Normal Starting 10m kubelet Starting kubelet.\n Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods\n Normal NodeHasNoDiskPressure 10m (x7 over 10m) kubelet Node master-us-east1-b-xt3x status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 10m (x7 over 10m) kubelet Node master-us-east1-b-xt3x status is now: NodeHasSufficientPID\n Normal NodeHasSufficientMemory 10m (x8 over 10m) kubelet Node master-us-east1-b-xt3x status is now: NodeHasSufficientMemory\n Normal RegisteredNode 9m2s node-controller Node master-us-east1-b-xt3x event: Registered Node master-us-east1-b-xt3x in Controller\n Normal Synced 8m33s (x3 over 8m33s) cloud-node-controller Node synced successfully\n Normal CIDRNotAvailable 7m55s (x10 over 8m33s) cidrAllocator Node master-us-east1-b-xt3x status is now: CIDRNotAvailable\n"
... skipping 11 lines ...
test/e2e/kubectl/framework.go:23
Kubectl describe
test/e2e/kubectl/kubectl.go:1259
should check if kubectl describe prints relevant information for rc and pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":10,"skipped":64,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:12.828: INFO: Only supported for providers [openstack] (not gce)
... skipping 210 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":9,"skipped":65,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:14.591: INFO: Only supported for providers [aws] (not gce)
... skipping 244 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 135 lines ...
Jun 22 22:18:49.822: INFO: Pod "pvc-volume-tester-sh6j2" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 22 22:18:54.824: INFO: Deleting pod "pvc-volume-tester-sh6j2" in namespace "csi-mock-volumes-9560"
Jun 22 22:18:54.862: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sh6j2" to be fully deleted
STEP: Checking CSI driver logs
Jun 22 22:18:58.977: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IkxreEkxUjVwMlQ5UzVobkdzZXNRUFJoSWEtSzgtT0FwNFQzX2tiS0lhTTAifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2NTU5MzY5MTksImlhdCI6MTY1NTkzNjMxOSwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLWUyZS1rb3BzLWdjZS1zdGFibGUuazhzLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTk1NjAiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLXNoNmoyIiwidWlkIjoiZTlhMDY4YzMtYzgzNS00MjRhLTlkZGQtMjAxYjVkNTM0ZTY4In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiYjE1YzJjYjctMTY1OS00MWViLTg2OTQtZGQyMTRkMzczNTViIn19LCJuYmYiOjE2NTU5MzYzMTksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTk1NjA6ZGVmYXVsdCJ9.H7RF3Q4yREm8P_57bnz21TjKUnI4iS93SNJmX1fTSo2gEuCwRL0TmsHBzkVb9ZM6ou9z-Frr5UIzo_aFQJfPkTS6109rpYEz6MZuCCCzVzyiFuzoHEkDCTR_BjFEnEGGRwognDJr6fs08E4Xj_XGMF7Rs6I3LMYxvs3w8zYwQI0-hhu5lDXlygY7mZHZwummbtpFr8FjX0Qdw8HjGI-2ytn8u5R5cSaMgqsSXT3Ip48pHzIWeFCK01SiyhpsBV3EuTryz9a6XQjymjxM8VIYcl3k9Xz1aBKCog74LEG3ajahZRaDDCUSZGpSEplSqe56bs8UhGO77aO6AaRUpkbwIQ","expirationTimestamp":"2022-06-22T22:28:39Z"}}
Jun 22 22:18:58.977: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"3d534120-f279-11ec-a524-4e1e2d6ed168","target_path":"/var/lib/kubelet/pods/e9a068c3-c835-424a-9ddd-201b5d534e68/volumes/kubernetes.io~csi/pvc-91a62bf2-7f04-421b-b1fe-77fafbe1024c/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-sh6j2
Jun 22 22:18:58.977: INFO: Deleting pod "pvc-volume-tester-sh6j2" in namespace "csi-mock-volumes-9560"
STEP: Deleting claim pvc-l2dtf
Jun 22 22:18:59.089: INFO: Waiting up to 2m0s for PersistentVolume pvc-91a62bf2-7f04-421b-b1fe-77fafbe1024c to get deleted
Jun 22 22:18:59.127: INFO: PersistentVolume pvc-91a62bf2-7f04-421b-b1fe-77fafbe1024c found and phase=Released (37.888743ms)
Jun 22 22:19:01.163: INFO: PersistentVolume pvc-91a62bf2-7f04-421b-b1fe-77fafbe1024c was removed
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
test/e2e/storage/csi_mock_volume.go:1574
token should be plumbed down when csiServiceAccountTokenEnabled=true
test/e2e/storage/csi_mock_volume.go:1602
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":6,"skipped":46,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:14.949: INFO: Only supported for providers [aws] (not gce)
... skipping 59 lines ...
test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
test/e2e/common/node/lifecycle_hook.go:46
should execute poststart http hook properly [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:16.314: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-3977bf53-3e2f-4ad6-9315-e704244b6f14
STEP: Creating a pod to test consume configMaps
Jun 22 22:19:10.332: INFO: Waiting up to 5m0s for pod "pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4" in namespace "configmap-9809" to be "Succeeded or Failed"
Jun 22 22:19:10.369: INFO: Pod "pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.536545ms
Jun 22 22:19:12.405: INFO: Pod "pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073249917s
Jun 22 22:19:14.407: INFO: Pod "pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074808149s
Jun 22 22:19:16.407: INFO: Pod "pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075234149s
STEP: Saw pod success
Jun 22 22:19:16.407: INFO: Pod "pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4" satisfied condition "Succeeded or Failed"
Jun 22 22:19:16.442: INFO: Trying to get logs from node nodes-us-east1-b-3xs4 pod pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4 container configmap-volume-test: <nil>
STEP: delete the pod
Jun 22 22:19:16.521: INFO: Waiting for pod pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4 to disappear
Jun 22 22:19:16.555: INFO: Pod pod-configmaps-f91e79c9-7801-4c0d-9384-c14a3ca130d4 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.624 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":88,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 55 lines ...
• [SLOW TEST:19.480 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":18,"skipped":168,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 22:19:21.505: INFO: Only supported for providers [azure] (not gce)
... skipping 45127 lines ...
es rules\"\nI0622 22:17:15.433042 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=47\nI0622 22:17:15.441630 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"81.125446ms\"\nI0622 22:17:16.975467 10 service.go:322] \"Service updated ports\" service=\"services-2806/svc-not-tolerate-unready\" portCount=0\nI0622 22:17:16.975521 10 service.go:462] \"Removing service port\" portName=\"services-2806/svc-not-tolerate-unready:http\"\nI0622 22:17:16.975555 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:17.015719 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0622 22:17:17.024169 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.646507ms\"\nI0622 22:17:18.024399 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:18.079098 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0622 22:17:18.094567 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.239769ms\"\nI0622 22:17:31.046304 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:31.092950 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43\nI0622 22:17:31.104319 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"58.071574ms\"\nI0622 22:17:31.104426 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:31.152704 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=18 numNATRules=37\nI0622 22:17:31.158927 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.558622ms\"\nI0622 22:17:31.175331 10 service.go:322] \"Service updated ports\" service=\"services-1506/nodeport-test\" 
portCount=0\nI0622 22:17:32.159169 10 service.go:462] \"Removing service port\" portName=\"services-1506/nodeport-test:http\"\nI0622 22:17:32.159251 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:32.218362 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:17:32.224058 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"64.973049ms\"\nI0622 22:17:32.911135 10 service.go:322] \"Service updated ports\" service=\"conntrack-8655/boom-server\" portCount=1\nI0622 22:17:33.225007 10 service.go:437] \"Adding new service port\" portName=\"conntrack-8655/boom-server\" servicePort=\"100.66.148.45:9000/TCP\"\nI0622 22:17:33.225102 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:33.269876 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:17:33.275464 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.49671ms\"\nI0622 22:17:43.963945 10 service.go:322] \"Service updated ports\" service=\"services-4885/multi-endpoint-test\" portCount=2\nI0622 22:17:43.964004 10 service.go:437] \"Adding new service port\" portName=\"services-4885/multi-endpoint-test:portname1\" servicePort=\"100.66.197.16:80/TCP\"\nI0622 22:17:43.964019 10 service.go:437] \"Adding new service port\" portName=\"services-4885/multi-endpoint-test:portname2\" servicePort=\"100.66.197.16:81/TCP\"\nI0622 22:17:43.964054 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:43.998671 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 22:17:44.003943 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.947833ms\"\nI0622 22:17:44.004009 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:44.046977 10 proxier.go:1461] \"Reloading service iptables 
data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 22:17:44.059609 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.620955ms\"\nI0622 22:17:48.573616 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:48.609652 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=44\nI0622 22:17:48.615200 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.63132ms\"\nI0622 22:18:01.278404 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:01.317513 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=49\nI0622 22:18:01.325846 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.508084ms\"\nI0622 22:18:05.112127 10 service.go:322] \"Service updated ports\" service=\"funny-ips-1305/funny-ip\" portCount=1\nI0622 22:18:05.112194 10 service.go:437] \"Adding new service port\" portName=\"funny-ips-1305/funny-ip:http\" servicePort=\"100.66.148.11:7180/TCP\"\nI0622 22:18:05.112234 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:05.158603 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=49\nI0622 22:18:05.164805 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.617429ms\"\nI0622 22:18:05.164892 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:05.203816 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=49\nI0622 22:18:05.209231 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.362139ms\"\nI0622 22:18:07.869483 10 service.go:322] \"Service updated ports\" service=\"services-3074/externalip-test\" portCount=1\nI0622 22:18:07.869569 10 service.go:437] \"Adding new service 
port\" portName=\"services-3074/externalip-test:http\" servicePort=\"100.68.15.78:80/TCP\"\nI0622 22:18:07.869607 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:07.917295 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=49\nI0622 22:18:07.923279 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.71476ms\"\nI0622 22:18:07.923470 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:07.968913 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=49\nI0622 22:18:07.976585 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.126297ms\"\nI0622 22:18:09.075511 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:09.111791 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=54\nI0622 22:18:09.117279 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.821723ms\"\nI0622 22:18:10.117556 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:10.161463 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=62\nI0622 22:18:10.166955 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.477461ms\"\nI0622 22:18:11.687542 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:11.723819 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=59\nI0622 22:18:11.729271 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.802563ms\"\nI0622 22:18:12.695801 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:12.751069 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=4 
numNATChains=24 numNATRules=57\nI0622 22:18:12.757543 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"61.808707ms\"\nI0622 22:18:12.973514 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:13.032587 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=54\nI0622 22:18:13.041967 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.511797ms\"\nI0622 22:18:13.108702 10 service.go:322] \"Service updated ports\" service=\"services-4885/multi-endpoint-test\" portCount=0\nI0622 22:18:14.042207 10 service.go:462] \"Removing service port\" portName=\"services-4885/multi-endpoint-test:portname1\"\nI0622 22:18:14.042241 10 service.go:462] \"Removing service port\" portName=\"services-4885/multi-endpoint-test:portname2\"\nI0622 22:18:14.042307 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:14.093905 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=52\nI0622 22:18:14.099905 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.730972ms\"\nI0622 22:18:14.540471 10 service.go:322] \"Service updated ports\" service=\"services-8248/externalname-service\" portCount=1\nI0622 22:18:15.100262 10 service.go:437] \"Adding new service port\" portName=\"services-8248/externalname-service:http\" servicePort=\"100.69.162.246:80/TCP\"\nI0622 22:18:15.100332 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:15.135756 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52\nI0622 22:18:15.142271 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.063902ms\"\nI0622 22:18:17.871209 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:17.917308 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 
numFilterRules=3 numNATChains=24 numNATRules=57
I0622 22:18:17.923670 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.518294ms"
I0622 22:18:18.238197 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:18.285555 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=60
I0622 22:18:18.294927 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.79756ms"
I0622 22:18:20.899341 10 service.go:322] "Service updated ports" service="funny-ips-1305/funny-ip" portCount=0
I0622 22:18:20.899396 10 service.go:462] "Removing service port" portName="funny-ips-1305/funny-ip:http"
I0622 22:18:20.899439 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:20.934407 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=57
I0622 22:18:20.939808 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.411197ms"
I0622 22:18:20.939990 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:20.973196 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=55
I0622 22:18:20.978314 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.463157ms"
I0622 22:18:25.631632 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:25.665980 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=58
I0622 22:18:25.671106 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.580494ms"
I0622 22:18:28.384287 10 service.go:322] "Service updated ports" service="webhook-6881/e2e-test-webhook" portCount=1
I0622 22:18:28.384350 10 service.go:437] "Adding new service port" portName="webhook-6881/e2e-test-webhook" servicePort="100.69.148.171:8443/TCP"
I0622 22:18:28.384388 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:28.433406 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 22:18:28.440548 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.204291ms"
I0622 22:18:28.440784 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:28.488326 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=63
I0622 22:18:28.494461 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.740653ms"
I0622 22:18:29.983134 10 service.go:322] "Service updated ports" service="webhook-6881/e2e-test-webhook" portCount=0
I0622 22:18:29.983192 10 service.go:462] "Removing service port" portName="webhook-6881/e2e-test-webhook"
I0622 22:18:29.983238 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:30.052392 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=60
I0622 22:18:30.060548 10 proxier.go:820] "SyncProxyRules complete" elapsed="77.349029ms"
I0622 22:18:30.770533 10 service.go:322] "Service updated ports" service="services-8248/externalname-service" portCount=0
I0622 22:18:30.770577 10 service.go:462] "Removing service port" portName="services-8248/externalname-service:http"
I0622 22:18:30.770634 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:30.821706 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=53
I0622 22:18:30.841382 10 proxier.go:820] "SyncProxyRules complete" elapsed="70.798484ms"
I0622 22:18:31.842469 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:31.879186 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 22:18:31.884154 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.807116ms"
I0622 22:18:33.889230 10 service.go:322] "Service updated ports" service="kubectl-7799/agnhost-primary" portCount=1
I0622 22:18:33.889283 10 service.go:437] "Adding new service port" portName="kubectl-7799/agnhost-primary" servicePort="100.68.252.181:6379/TCP"
I0622 22:18:33.889323 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:33.946482 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50
I0622 22:18:33.957848 10 proxier.go:820] "SyncProxyRules complete" elapsed="68.565183ms"
I0622 22:18:33.957922 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:34.035624 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50
I0622 22:18:34.041874 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.981447ms"
I0622 22:18:39.599423 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:39.628612 10 service.go:322] "Service updated ports" service="services-3074/externalip-test" portCount=0
I0622 22:18:39.636483 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=43
I0622 22:18:39.642355 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.004245ms"
I0622 22:18:39.642403 10 service.go:462] "Removing service port" portName="services-3074/externalip-test:http"
I0622 22:18:39.642617 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:39.677400 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39
I0622 22:18:39.683698 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.295294ms"
I0622 22:18:40.002104 10 service.go:322] "Service updated ports" service="kubectl-7799/agnhost-primary" portCount=0
I0622 22:18:40.683880 10 service.go:462] "Removing service port" portName="kubectl-7799/agnhost-primary"
I0622 22:18:40.683958 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:40.718696 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:18:40.725512 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.656327ms"
I0622 22:18:42.347554 10 service.go:322] "Service updated ports" service="conntrack-8655/boom-server" portCount=0
I0622 22:18:42.347615 10 service.go:462] "Removing service port" portName="conntrack-8655/boom-server"
I0622 22:18:42.347651 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:42.383391 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36
I0622 22:18:42.388141 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.52673ms"
I0622 22:18:43.388383 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:43.425133 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:18:43.430113 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.788719ms"
I0622 22:18:52.265566 10 service.go:322] "Service updated ports" service="sctp-192/sctp-clusterip" portCount=1
I0622 22:18:52.265628 10 service.go:437] "Adding new service port" portName="sctp-192/sctp-clusterip" servicePort="100.65.22.213:5060/SCTP"
I0622 22:18:52.265655 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:52.317015 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:18:52.330561 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.935347ms"
I0622 22:18:52.330634 10 proxier.go:853] "Syncing iptables rules"
I0622 22:18:52.387608 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:18:52.393237 10 proxier.go:820] "SyncProxyRules complete" elapsed="62.622763ms"
I0622 22:19:01.908688 10 service.go:322] "Service updated ports" service="services-4250/test-service-6ntfw" portCount=1
I0622 22:19:01.908751 10 service.go:437] "Adding new service port" portName="services-4250/test-service-6ntfw:http" servicePort="100.65.163.109:80/TCP"
I0622 22:19:01.908783 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:01.943183 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34
I0622 22:19:01.948598 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.853671ms"
I0622 22:19:02.018214 10 service.go:322] "Service updated ports" service="services-4250/test-service-6ntfw" portCount=1
I0622 22:19:02.018274 10 service.go:439] "Updating existing service port" portName="services-4250/test-service-6ntfw:http" servicePort="100.65.163.109:80/TCP"
I0622 22:19:02.018306 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:02.052635 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=34
I0622 22:19:02.057992 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.727126ms"
I0622 22:19:02.202826 10 service.go:322] "Service updated ports" service="services-4250/test-service-6ntfw" portCount=1
I0622 22:19:02.283108 10 service.go:322] "Service updated ports" service="services-4250/test-service-6ntfw" portCount=0
I0622 22:19:03.058715 10 service.go:462] "Removing service port" portName="services-4250/test-service-6ntfw:http"
I0622 22:19:03.058910 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:03.110590 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:19:03.121776 10 proxier.go:820] "SyncProxyRules complete" elapsed="63.081371ms"
I0622 22:19:03.647096 10 service.go:322] "Service updated ports" service="sctp-192/sctp-clusterip" portCount=0
I0622 22:19:04.122242 10 service.go:462] "Removing service port" portName="sctp-192/sctp-clusterip"
I0622 22:19:04.122305 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:04.158310 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:19:04.163253 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.03321ms"
I0622 22:19:06.665684 10 service.go:322] "Service updated ports" service="webhook-6349/e2e-test-webhook" portCount=1
I0622 22:19:06.665750 10 service.go:437] "Adding new service port" portName="webhook-6349/e2e-test-webhook" servicePort="100.71.217.61:8443/TCP"
I0622 22:19:06.665781 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:06.701692 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:19:06.706823 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.992595ms"
I0622 22:19:06.706926 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:06.740789 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:19:06.746118 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.246984ms"
I0622 22:19:07.971151 10 service.go:322] "Service updated ports" service="kubectl-2629/agnhost-primary" portCount=1
I0622 22:19:07.971214 10 service.go:437] "Adding new service port" portName="kubectl-2629/agnhost-primary" servicePort="100.67.84.245:6379/TCP"
I0622 22:19:07.971253 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:08.004617 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39
I0622 22:19:08.010496 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.283167ms"
I0622 22:19:08.137397 10 service.go:322] "Service updated ports" service="webhook-6349/e2e-test-webhook" portCount=0
I0622 22:19:09.010868 10 service.go:462] "Removing service port" portName="webhook-6349/e2e-test-webhook"
I0622 22:19:09.010953 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:09.084968 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36
I0622 22:19:09.094638 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.790438ms"
I0622 22:19:10.559367 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:10.606626 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:19:10.612361 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.042898ms"
I0622 22:19:17.829589 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:17.863857 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36
I0622 22:19:17.868927 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.396051ms"
I0622 22:19:17.957272 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:18.004335 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:19:18.011318 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.08864ms"
I0622 22:19:18.124323 10 service.go:322] "Service updated ports" service="kubectl-2629/agnhost-primary" portCount=0
I0622 22:19:19.011945 10 service.go:462] "Removing service port" portName="kubectl-2629/agnhost-primary"
I0622 22:19:19.012117 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:19.047382 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:19:19.052385 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.464275ms"
I0622 22:19:26.287037 10 service.go:322] "Service updated ports" service="webhook-7646/e2e-test-webhook" portCount=1
I0622 22:19:26.287096 10 service.go:437] "Adding new service port" portName="webhook-7646/e2e-test-webhook" servicePort="100.65.209.249:8443/TCP"
I0622 22:19:26.287128 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:26.321967 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:19:26.327151 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.057576ms"
I0622 22:19:26.327355 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:26.366290 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:19:26.372594 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.398316ms"
I0622 22:19:32.049692 10 service.go:322] "Service updated ports" service="webhook-7646/e2e-test-webhook" portCount=0
I0622 22:19:32.049749 10 service.go:462] "Removing service port" portName="webhook-7646/e2e-test-webhook"
I0622 22:19:32.049786 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:32.083441 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36
I0622 22:19:32.088228 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.481168ms"
I0622 22:19:32.088413 10 proxier.go:853] "Syncing iptables rules"
I0622 22:19:32.125678 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:19:32.131602 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.226148ms"
I0622 22:20:11.235228 10 service.go:322] "Service updated ports" service="webhook-1103/e2e-test-webhook" portCount=1
I0622 22:20:11.235329 10 service.go:437] "Adding new service port" portName="webhook-1103/e2e-test-webhook" servicePort="100.66.45.65:8443/TCP"
I0622 22:20:11.235363 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:11.282432 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:20:11.287601 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.303324ms"
I0622 22:20:11.287692 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:11.333218 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:20:11.339267 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.620484ms"
I0622 22:20:13.216890 10 service.go:322] "Service updated ports" service="webhook-1103/e2e-test-webhook" portCount=0
I0622 22:20:13.216953 10 service.go:462] "Removing service port" portName="webhook-1103/e2e-test-webhook"
I0622 22:20:13.216983 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:13.252545 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36
I0622 22:20:13.257394 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.439365ms"
I0622 22:20:13.257575 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:13.291907 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:20:13.299694 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.250601ms"
I0622 22:20:22.823910 10 service.go:322] "Service updated ports" service="proxy-4458/proxy-service-h2bww" portCount=4
I0622 22:20:22.823975 10 service.go:437] "Adding new service port" portName="proxy-4458/proxy-service-h2bww:portname1" servicePort="100.71.176.193:80/TCP"
I0622 22:20:22.823989 10 service.go:437] "Adding new service port" portName="proxy-4458/proxy-service-h2bww:portname2" servicePort="100.71.176.193:81/TCP"
I0622 22:20:22.824002 10 service.go:437] "Adding new service port" portName="proxy-4458/proxy-service-h2bww:tlsportname1" servicePort="100.71.176.193:443/TCP"
I0622 22:20:22.824015 10 service.go:437] "Adding new service port" portName="proxy-4458/proxy-service-h2bww:tlsportname2" servicePort="100.71.176.193:444/TCP"
I0622 22:20:22.824046 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:22.859924 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34
I0622 22:20:22.865074 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.106431ms"
I0622 22:20:22.865135 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:22.899225 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34
I0622 22:20:22.904351 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.236834ms"
I0622 22:20:23.707688 10 service.go:322] "Service updated ports" service="webhook-8946/e2e-test-webhook" portCount=1
I0622 22:20:23.905457 10 service.go:437] "Adding new service port" portName="webhook-8946/e2e-test-webhook" servicePort="100.68.246.134:8443/TCP"
I0622 22:20:23.905559 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:23.961398 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=39
I0622 22:20:23.969509 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.050318ms"
I0622 22:20:25.281904 10 service.go:322] "Service updated ports" service="webhook-8946/e2e-test-webhook" portCount=0
I0622 22:20:25.281948 10 service.go:462] "Removing service port" portName="webhook-8946/e2e-test-webhook"
I0622 22:20:25.281982 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:25.318655 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=36
I0622 22:20:25.324426 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.42469ms"
I0622 22:20:26.306471 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:26.363293 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34
I0622 22:20:26.369152 10 proxier.go:820] "SyncProxyRules complete" elapsed="62.786283ms"
I0622 22:20:26.498108 10 service.go:322] "Service updated ports" service="services-2691/up-down-1" portCount=1
I0622 22:20:27.080309 10 service.go:437] "Adding new service port" portName="services-2691/up-down-1" servicePort="100.69.65.39:80/TCP"
I0622 22:20:27.080455 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:27.115622 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=54
I0622 22:20:27.120950 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.709755ms"
I0622 22:20:28.121251 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:28.156462 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=59
I0622 22:20:28.161929 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.776891ms"
I0622 22:20:29.207477 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:29.247147 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=47
I0622 22:20:29.254935 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.587878ms"
I0622 22:20:30.490256 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:30.526314 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42
I0622 22:20:30.531944 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.748954ms"
I0622 22:20:33.099033 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:33.134467 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42
I0622 22:20:33.139820 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.885915ms"
I0622 22:20:33.279723 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:33.313700 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42
I0622 22:20:33.321631 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.970531ms"
I0622 22:20:34.680190 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:34.717639 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=45
I0622 22:20:34.723180 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.065433ms"
I0622 22:20:35.679225 10 service.go:322] "Service updated ports" service="services-2691/up-down-2" portCount=1
I0622 22:20:35.679281 10 service.go:437] "Adding new service port" portName="services-2691/up-down-2" servicePort="100.64.247.246:80/TCP"
I0622 22:20:35.679317 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:35.716280 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=45
I0622 22:20:35.722170 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.890591ms"
I0622 22:20:36.722923 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:36.759597 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=45
I0622 22:20:36.765142 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.28108ms"
I0622 22:20:38.627845 10 service.go:322] "Service updated ports" service="proxy-4458/proxy-service-h2bww" portCount=0
I0622 22:20:38.627900 10 service.go:462] "Removing service port" portName="proxy-4458/proxy-service-h2bww:portname1"
I0622 22:20:38.627913 10 service.go:462] "Removing service port" portName="proxy-4458/proxy-service-h2bww:portname2"
I0622 22:20:38.627922 10 service.go:462] "Removing service port" portName="proxy-4458/proxy-service-h2bww:tlsportname1"
I0622 22:20:38.627930 10 service.go:462] "Removing service port" portName="proxy-4458/proxy-service-h2bww:tlsportname2"
I0622 22:20:38.627969 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:38.668495 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 22:20:38.674272 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.368571ms"
I0622 22:20:38.686753 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:38.721552 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 22:20:38.727175 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.48373ms"
I0622 22:20:39.727441 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:39.765830 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 22:20:39.771397 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.04704ms"
I0622 22:20:40.771654 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:40.807389 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53
I0622 22:20:40.812868 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.32008ms"
I0622 22:20:44.896679 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:44.960390 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56
I0622 22:20:44.967158 10 proxier.go:820] "SyncProxyRules complete" elapsed="70.628831ms"
I0622 22:20:51.388037 10 service.go:322] "Service updated ports" service="webhook-2249/e2e-test-webhook" portCount=1
I0622 22:20:51.388101 10 service.go:437] "Adding new service port" portName="webhook-2249/e2e-test-webhook" servicePort="100.71.212.228:8443/TCP"
I0622 22:20:51.388138 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:51.427144 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 22:20:51.440081 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.986021ms"
I0622 22:20:51.440212 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:51.476017 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61
I0622 22:20:51.481871 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.718444ms"
I0622 22:20:51.722177 10 service.go:322] "Service updated ports" service="services-5697/nodeport-service" portCount=1
I0622 22:20:51.759961 10 service.go:322] "Service updated ports" service="services-5697/externalsvc" portCount=1
I0622 22:20:52.482226 10 service.go:437] "Adding new service port" portName="services-5697/externalsvc" servicePort="100.71.106.98:80/TCP"
I0622 22:20:52.482302 10 service.go:437] "Adding new service port" portName="services-5697/nodeport-service" servicePort="100.70.77.177:80/TCP"
I0622 22:20:52.482377 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:52.520925 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=61
I0622 22:20:52.526516 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.369216ms"
I0622 22:20:52.761031 10 service.go:322] "Service updated ports" service="webhook-2249/e2e-test-webhook" portCount=0
I0622 22:20:53.526786 10 service.go:462] "Removing service port" portName="webhook-2249/e2e-test-webhook"
I0622 22:20:53.526887 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:53.564046 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=58
I0622 22:20:53.569636 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.913929ms"
I0622 22:20:54.569888 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:54.608008 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=61
I0622 22:20:54.622564 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.745657ms"
I0622 22:20:58.491626 10 proxier.go:853] "Syncing iptables rules"
I0622 22:20:58.537705 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=64
I0622 22:20:58.545209 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.647551ms"
I0622 22:21:00.629100 10 service.go:322] "Service updated ports" service="conntrack-8214/svc-udp" portCount=1
I0622 22:21:00.629159 10 service.go:437] "Adding new service port" portName="conntrack-8214/svc-udp:udp" servicePort="100.70.207.139:80/UDP"
I0622 22:21:00.629200 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:00.672510 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=64
I0622 22:21:00.679370 10 proxier.go:820] "SyncProxyRules complete" elapsed="50.21785ms"
I0622 22:21:00.679460 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:00.717094 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=64
I0622 22:21:00.723751 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.335138ms"
I0622 22:21:00.980654 10 service.go:322] "Service updated ports" service="services-5697/nodeport-service" portCount=0
I0622 22:21:01.724695 10 service.go:462] "Removing service port" portName="services-5697/nodeport-service"
I0622 22:21:01.724775 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:01.772159 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=64
I0622 22:21:01.779055 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.377398ms"
I0622 22:21:03.398973 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:03.450260 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=62
I0622 22:21:03.459231 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.340299ms"
I0622 22:21:04.460123 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:04.504184 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=56
I0622 22:21:04.511173 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.165965ms"
I0622 22:21:05.280538 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:05.317437 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53
I0622 22:21:05.323094 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.632557ms"
I0622 22:21:05.731735 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:05.772323 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53
I0622 22:21:05.777854 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.197949ms"
I0622 22:21:06.778659 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:06.814500 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53
I0622 22:21:06.820289 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.748109ms"
I0622 22:21:07.711187 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:07.749159 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53
I0622 22:21:07.755295 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.169787ms"
I0622 22:21:07.802573 10 service.go:322] "Service updated ports" service="services-2691/up-down-1" portCount=0
I0622 22:21:08.755474 10 service.go:462] "Removing service port" portName="services-2691/up-down-1"
I0622 22:21:08.755638 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-8214/svc-udp:udp" clusterIP="100.70.207.139"
I0622 22:21:08.755716 10 proxier.go:847] "Stale service" protocol="udp" servicePortName="conntrack-8214/svc-udp:udp" nodePort=31112
I0622 22:21:08.755726 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:08.807823 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61
I0622 22:21:08.826948 10 proxier.go:820] "SyncProxyRules complete" elapsed="71.438771ms"
I0622 22:21:09.757094 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:09.802564 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=59
I0622 22:21:09.809617 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.588539ms"
I0622 22:21:10.809857 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:10.857779 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=55
I0622 22:21:10.863689 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.922201ms"
I0622 22:21:11.762594 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:11.798419 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 22:21:11.804192 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.660385ms"
I0622 22:21:12.804673 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:12.857666 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 22:21:12.864824 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.247041ms"
I0622 22:21:14.980267 10 service.go:322] "Service updated ports" service="apply-4603/test-svc" portCount=1
I0622 22:21:14.980328 10 service.go:437] "Adding new service port" portName="apply-4603/test-svc" servicePort="100.65.40.214:8080/UDP"
I0622 22:21:14.980366 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:15.017580 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 22:21:15.023418 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.0958ms"
I0622 22:21:15.326450 10 service.go:322] "Service updated ports" service="webhook-4223/e2e-test-webhook" portCount=1
I0622 22:21:15.326501 10 service.go:437] "Adding new service port" portName="webhook-4223/e2e-test-webhook" servicePort="100.67.124.14:8443/TCP"
I0622 22:21:15.326540 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:15.362572 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53
I0622 22:21:15.368435 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.937275ms"
I0622 22:21:15.951358 10 service.go:322] "Service updated ports" service="services-5697/externalsvc" portCount=0
I0622 22:21:16.003996 10 service.go:462] "Removing service port" portName="services-5697/externalsvc"
I0622 22:21:16.004098 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:16.040759 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 22:21:16.047094 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.104126ms"
I0622 22:21:17.047980 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:17.083422 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 22:21:17.089003 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.095415ms"
I0622 22:21:19.736335 10 service.go:322] "Service updated ports" service="webhook-4223/e2e-test-webhook" portCount=0
I0622 22:21:19.736397 10 service.go:462] "Removing service port" portName="webhook-4223/e2e-test-webhook"
I0622 22:21:19.736438 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:19.776391 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=55
I0622 22:21:19.783177 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.777729ms"
I0622 22:21:19.783369 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:19.818454 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 22:21:19.824245 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.022384ms"
I0622 22:21:20.209174 10 service.go:322] "Service updated ports" service="apply-4603/test-svc" portCount=0
I0622 22:21:20.824676 10 service.go:462] "Removing service port" portName="apply-4603/test-svc"
I0622 22:21:20.824744 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:20.911088 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53
I0622 22:21:20.939482 10 proxier.go:820] "SyncProxyRules complete" elapsed="114.838027ms"
I0622 22:21:21.938582 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:22.001294 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56
I0622 22:21:22.007549 10 proxier.go:820] "SyncProxyRules complete" elapsed="69.035583ms"
I0622 22:21:22.961537 10 service.go:322] "Service updated ports" service="services-8737/service-proxy-toggled" portCount=1
I0622 22:21:22.961593 10 service.go:437] "Adding new service port" portName="services-8737/service-proxy-toggled" servicePort="100.66.0.218:80/TCP"
I0622 22:21:22.961635 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:22.996507 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 22:21:23.002411 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.824184ms"
I0622 22:21:23.358278 10 service.go:322] "Service updated ports" service="services-3636/nodeport-collision-1" portCount=1
I0622 22:21:23.524425 10 service.go:322] "Service updated ports" service="services-3636/nodeport-collision-2" portCount=1
I0622 22:21:23.824712 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:23.863319 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=59
I0622 22:21:23.876495 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.879331ms"
I0622 22:21:24.214949 10 service.go:322] "Service updated ports" service="dns-4380/test-service-2" portCount=1
I0622 22:21:24.877351 10 service.go:437] "Adding new service port" portName="dns-4380/test-service-2:http" servicePort="100.65.217.220:80/TCP"
I0622 22:21:24.877433 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:24.925713 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 22:21:24.934896 10 proxier.go:820] "SyncProxyRules complete" elapsed="57.579054ms"
I0622 22:21:25.936016 10 proxier.go:853] "Syncing iptables 
rules\"\nI0622 22:21:25.979074 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61\nI0622 22:21:25.986842 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.93489ms\"\nI0622 22:21:27.596362 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:27.641758 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=64\nI0622 22:21:27.648375 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.088162ms\"\nI0622 22:21:33.896431 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:33.934844 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=69\nI0622 22:21:33.941373 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.009201ms\"\nI0622 22:21:34.048189 10 service.go:322] \"Service updated ports\" service=\"webhook-6550/e2e-test-webhook\" portCount=1\nI0622 22:21:34.048261 10 service.go:437] \"Adding new service port\" portName=\"webhook-6550/e2e-test-webhook\" servicePort=\"100.65.107.216:8443/TCP\"\nI0622 22:21:34.048306 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:34.106904 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=69\nI0622 22:21:34.115249 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"67.006764ms\"\nI0622 22:21:35.116216 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:35.157159 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=30 numNATRules=74\nI0622 22:21:35.164558 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.429852ms\"\nI0622 22:21:36.310156 10 service.go:322] \"Service updated ports\" 
service=\"services-2691/up-down-3\" portCount=1\nI0622 22:21:36.310218 10 service.go:437] \"Adding new service port\" portName=\"services-2691/up-down-3\" servicePort=\"100.66.189.159:80/TCP\"\nI0622 22:21:36.310266 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:36.345773 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=74\nI0622 22:21:36.357113 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.895405ms\"\nI0622 22:21:37.357493 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:37.394030 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=74\nI0622 22:21:37.400522 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.089079ms\"\nI0622 22:21:37.889472 10 service.go:322] \"Service updated ports\" service=\"webhook-6550/e2e-test-webhook\" portCount=0\nI0622 22:21:37.939390 10 service.go:462] \"Removing service port\" portName=\"webhook-6550/e2e-test-webhook\"\nI0622 22:21:37.939490 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:37.991749 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=71\nI0622 22:21:37.998548 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.166412ms\"\nI0622 22:21:38.658269 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service\" portCount=1\nI0622 22:21:38.709356 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service-np\" portCount=1\nI0622 22:21:38.925466 10 service.go:322] \"Service updated ports\" service=\"conntrack-8214/svc-udp\" portCount=0\nI0622 22:21:38.925532 10 service.go:437] \"Adding new service port\" portName=\"resourcequota-1890/test-service\" servicePort=\"100.64.177.183:80/TCP\"\nI0622 22:21:38.925551 10 
service.go:437] \"Adding new service port\" portName=\"resourcequota-1890/test-service-np\" servicePort=\"100.67.152.30:80/TCP\"\nI0622 22:21:38.925562 10 service.go:462] \"Removing service port\" portName=\"conntrack-8214/svc-udp:udp\"\nI0622 22:21:38.925629 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:38.963593 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=30 numNATRules=69\nI0622 22:21:38.975820 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.295546ms\"\nI0622 22:21:39.976085 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:40.011485 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=27 numNATRules=66\nI0622 22:21:40.017204 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.183409ms\"\nI0622 22:21:40.863749 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service\" portCount=0\nI0622 22:21:40.922960 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service-np\" portCount=0\nI0622 22:21:40.923014 10 service.go:462] \"Removing service port\" portName=\"resourcequota-1890/test-service\"\nI0622 22:21:40.923026 10 service.go:462] \"Removing service port\" portName=\"resourcequota-1890/test-service-np\"\nI0622 22:21:40.923108 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:40.967980 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=69\nI0622 22:21:40.979915 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.895608ms\"\nI0622 22:21:43.192990 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:43.271620 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=3 numNATChains=29 numNATRules=72\nI0622 22:21:43.278553 
10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"85.645948ms\"\nI0622 22:21:51.519567 10 service.go:322] \"Service updated ports\" service=\"services-8737/service-proxy-toggled\" portCount=0\nI0622 22:21:51.519611 10 service.go:462] \"Removing service port\" portName=\"services-8737/service-proxy-toggled\"\nI0622 22:21:51.519659 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:51.556466 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=29 numNATRules=65\nI0622 22:21:51.563468 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.851356ms\"\nI0622 22:21:51.563705 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:51.609578 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61\nI0622 22:21:51.614906 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.399871ms\"\nI0622 22:21:56.211401 10 service.go:322] \"Service updated ports\" service=\"services-8737/service-proxy-toggled\" portCount=1\nI0622 22:21:56.211460 10 service.go:437] \"Adding new service port\" portName=\"services-8737/service-proxy-toggled\" servicePort=\"100.66.0.218:80/TCP\"\nI0622 22:21:56.211695 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:56.259057 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61\nI0622 22:21:56.267628 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.172093ms\"\nI0622 22:21:56.267735 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:56.303886 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=3 numNATChains=29 numNATRules=72\nI0622 22:21:56.310408 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.740241ms\"\nI0622 22:22:00.685403 10 proxier.go:853] 
\"Syncing iptables rules\"\nI0622 22:22:00.718949 10 service.go:322] \"Service updated ports\" service=\"dns-4380/test-service-2\" portCount=0\nI0622 22:22:00.728367 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=69\nI0622 22:22:00.735378 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.024989ms\"\nI0622 22:22:00.735423 10 service.go:462] \"Removing service port\" portName=\"dns-4380/test-service-2:http\"\nI0622 22:22:00.735472 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:00.785357 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=67\nI0622 22:22:00.792107 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.68288ms\"\nI0622 22:22:01.793620 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:01.844921 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=67\nI0622 22:22:01.853668 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.113273ms\"\nI0622 22:22:03.948581 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:03.985209 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=27 numNATRules=58\nI0622 22:22:03.990435 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.973305ms\"\nI0622 22:22:03.990551 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:04.026651 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=48\nI0622 22:22:04.032731 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.255053ms\"\nI0622 22:22:05.033541 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:05.081329 10 proxier.go:1461] \"Reloading 
service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=45\nI0622 22:22:05.086554 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.25636ms\"\nI0622 22:22:05.135758 10 service.go:322] \"Service updated ports\" service=\"services-2691/up-down-2\" portCount=0\nI0622 22:22:05.151079 10 service.go:322] \"Service updated ports\" service=\"services-2691/up-down-3\" portCount=0\nI0622 22:22:06.086857 10 service.go:462] \"Removing service port\" portName=\"services-2691/up-down-2\"\nI0622 22:22:06.086910 10 service.go:462] \"Removing service port\" portName=\"services-2691/up-down-3\"\nI0622 22:22:06.086961 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:06.122076 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0622 22:22:06.126879 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.050263ms\"\nI0622 22:22:26.059235 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:26.095118 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43\nI0622 22:22:26.100186 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.040111ms\"\nI0622 22:22:26.100454 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:26.133619 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=18 numNATRules=37\nI0622 22:22:26.138557 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.331025ms\"\nI0622 22:22:26.182792 10 service.go:322] \"Service updated ports\" service=\"services-8737/service-proxy-toggled\" portCount=0\nI0622 22:22:27.138723 10 service.go:462] \"Removing service port\" portName=\"services-8737/service-proxy-toggled\"\nI0622 22:22:27.138950 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 
22:22:27.186921 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:22:27.196572 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.882354ms\"\nI0622 22:22:31.102212 10 service.go:322] \"Service updated ports\" service=\"conntrack-1762/svc-udp\" portCount=1\nI0622 22:22:31.102274 10 service.go:437] \"Adding new service port\" portName=\"conntrack-1762/svc-udp:udp\" servicePort=\"100.66.89.30:80/UDP\"\nI0622 22:22:31.102308 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:31.137300 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:22:31.142557 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.284074ms\"\nI0622 22:22:31.142828 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:31.184242 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:22:31.192611 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.01229ms\"\nI0622 22:22:40.701579 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-1762/svc-udp:udp\" clusterIP=\"100.66.89.30\"\nI0622 22:22:40.701608 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:40.739688 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:22:40.755398 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.951099ms\"\nI0622 22:22:46.201106 10 service.go:322] \"Service updated ports\" service=\"services-6764/nodeport-update-service\" portCount=1\nI0622 22:22:46.201163 10 service.go:437] \"Adding new service port\" portName=\"services-6764/nodeport-update-service\" servicePort=\"100.66.51.116:80/TCP\"\nI0622 22:22:46.201202 10 
proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:46.237460 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0622 22:22:46.243441 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.280112ms\"\nI0622 22:22:46.243511 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:46.275556 10 service.go:322] \"Service updated ports\" service=\"services-6764/nodeport-update-service\" portCount=1\nI0622 22:22:46.282411 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0622 22:22:46.287900 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.411532ms\"\nI0622 22:22:47.288118 10 service.go:437] \"Adding new service port\" portName=\"services-6764/nodeport-update-service:tcp-port\" servicePort=\"100.66.51.116:80/TCP\"\nI0622 22:22:47.288150 10 service.go:462] \"Removing service port\" portName=\"services-6764/nodeport-update-service\"\nI0622 22:22:47.288187 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:47.325705 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 22:22:47.332976 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.885673ms\"\nI0622 22:22:47.554391 10 service.go:322] \"Service updated ports\" service=\"webhook-6812/e2e-test-webhook\" portCount=1\nI0622 22:22:48.333240 10 service.go:437] \"Adding new service port\" portName=\"webhook-6812/e2e-test-webhook\" servicePort=\"100.64.215.29:8443/TCP\"\nI0622 22:22:48.333330 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:48.399470 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44\nI0622 22:22:48.406129 10 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"72.95683ms\"\nI0622 22:22:48.932844 10 service.go:322] \"Service updated ports\" service=\"webhook-6812/e2e-test-webhook\" portCount=0\nI0622 22:22:49.407317 10 service.go:462] \"Removing service port\" portName=\"webhook-6812/e2e-test-webhook\"\nI0622 22:22:49.407434 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:49.442834 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=49\nI0622 22:22:49.448544 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.378202ms\"\nI0622 22:22:53.011386 10 service.go:322] \"Service updated ports\" service=\"endpointslicemirroring-2582/example-custom-endpoints\" portCount=1\nI0622 22:22:53.011455 10 service.go:437] \"Adding new service port\" portName=\"endpointslicemirroring-2582/example-custom-endpoints:example\" servicePort=\"100.67.187.138:80/TCP\"\nI0622 22:22:53.011497 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:53.051165 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47\nI0622 22:22:53.057281 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.833948ms\"\nI0622 22:22:53.085967 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:53.123440 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47\nI0622 22:22:53.129062 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.144791ms\"\nI0622 22:22:54.129331 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:54.166951 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47\nI0622 22:22:54.172504 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.253487ms\"\nI0622 22:22:55.186559 10 proxier.go:853] \"Syncing iptables 
rules\"\nI0622 22:22:55.222430 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50\nI0622 22:22:55.228045 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.561143ms\"\nI0622 22:22:58.484737 10 service.go:322] \"Service updated ports\" service=\"endpointslicemirroring-2582/example-custom-endpoints\" portCount=0\nI0622 22:22:58.484787 10 service.go:462] \"Removing service port\" portName=\"endpointslicemirroring-2582/example-custom-endpoints:example\"\nI0622 22:22:58.484832 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:58.527536 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 22:22:58.533816 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.028497ms\"\nI0622 22:22:58.598278 10 service.go:322] \"Service updated ports\" service=\"services-1344/clusterip-service\" portCount=1\nI0622 22:22:58.598329 10 service.go:437] \"Adding new service port\" portName=\"services-1344/clusterip-service\" servicePort=\"100.67.161.58:80/TCP\"\nI0622 22:22:58.598371 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:58.642846 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50\nI0622 22:22:58.643150 10 service.go:322] \"Service updated ports\" service=\"services-1344/externalsvc\" portCount=1\nI0622 22:22:58.651153 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.827123ms\"\nI0622 22:22:59.618244 10 service.go:437] \"Adding new service port\" portName=\"services-1344/externalsvc\" servicePort=\"100.64.245.54:80/TCP\"\nI0622 22:22:59.618357 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:22:59.661080 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 
numNATChains=24 numNATRules=58\nI0622 22:22:59.666859 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.637626ms\"\nI0622 22:23:00.667056 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:00.710918 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=56\nI0622 22:23:00.729934 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.988838ms\"\nI0622 22:23:04.803182 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:04.846814 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58\nI0622 22:23:04.852563 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.449774ms\"\nI0622 22:23:07.862219 10 service.go:322] \"Service updated ports\" service=\"services-1344/clusterip-service\" portCount=0\nI0622 22:23:07.862377 10 service.go:462] \"Removing service port\" portName=\"services-1344/clusterip-service\"\nI0622 22:23:07.862437 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:07.910082 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=58\nI0622 22:23:07.916308 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.928008ms\"\nI0622 22:23:10.814692 10 service.go:322] \"Service updated ports\" service=\"services-6764/nodeport-update-service\" portCount=2\nI0622 22:23:10.814911 10 service.go:439] \"Updating existing service port\" portName=\"services-6764/nodeport-update-service:tcp-port\" servicePort=\"100.66.51.116:80/TCP\"\nI0622 22:23:10.814949 10 service.go:437] \"Adding new service port\" portName=\"services-6764/nodeport-update-service:udp-port\" servicePort=\"100.66.51.116:80/UDP\"\nI0622 22:23:10.814998 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:10.850648 10 proxier.go:1461] \"Reloading service iptables data\" 
numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=58\nI0622 22:23:10.856869 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.094462ms\"\nI0622 22:23:10.857120 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"services-6764/nodeport-update-service:udp-port\" clusterIP=\"100.66.51.116\"\nI0622 22:23:10.857221 10 proxier.go:847] \"Stale service\" protocol=\"udp\" servicePortName=\"services-6764/nodeport-update-service:udp-port\" nodePort=31929\nI0622 22:23:10.857235 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:10.893249 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=69\nI0622 22:23:10.913947 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.02953ms\"\nI0622 22:23:15.337933 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:15.372054 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=66\nI0622 22:23:15.384529 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.69374ms\"\nI0622 22:23:15.384634 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:15.420545 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=64\nI0622 22:23:15.426341 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.769728ms\"\nI0622 22:23:15.592601 10 service.go:322] \"Service updated ports\" service=\"conntrack-1762/svc-udp\" portCount=0\nI0622 22:23:16.427338 10 service.go:462] \"Removing service port\" portName=\"conntrack-1762/svc-udp:udp\"\nI0622 22:23:16.427443 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:16.471850 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 
numNATRules=64\nI0622 22:23:16.491378 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"64.061285ms\"\nI0622 22:23:17.491673 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:17.527244 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=59\nI0622 22:23:17.532897 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.329044ms\"\nI0622 22:23:18.533937 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:18.569162 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56\nI0622 22:23:18.575038 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.197885ms\"\nI0622 22:23:19.485603 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:19.521149 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56\nI0622 22:23:19.526519 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.982721ms\"\nI0622 22:23:19.720900 10 service.go:322] \"Service updated ports\" service=\"services-1344/externalsvc\" portCount=0\nI0622 22:23:20.526743 10 service.go:462] \"Removing service port\" portName=\"services-1344/externalsvc\"\nI0622 22:23:20.526840 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:23:20.564760 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56\nI0622 22:23:20.570484 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.797369ms\"\nI0622 22:23:21.773762 10 service.go:322] \"Service updated ports\" service=\"webhook-8046/e2e-test-webhook\" portCount=1\nI0622 22:23:21.773812 10 service.go:437] \"Adding new service port\" portName=\"webhook-8046/e2e-test-webhook\" servicePort=\"100.68.65.52:8443/TCP\"\nI0622 22:23:21.773853 10 
proxier.go:853] "Syncing iptables rules"
I0622 22:23:21.812451 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 22:23:21.818053 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.24235ms"
I0622 22:23:22.818250 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:22.853563 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61
I0622 22:23:22.859018 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.832029ms"
I0622 22:23:23.380628 10 service.go:322] "Service updated ports" service="webhook-2643/e2e-test-webhook" portCount=1
I0622 22:23:23.380691 10 service.go:437] "Adding new service port" portName="webhook-2643/e2e-test-webhook" servicePort="100.70.239.243:8443/TCP"
I0622 22:23:23.380735 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:23.415669 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61
I0622 22:23:23.422380 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.695655ms"
I0622 22:23:23.449932 10 service.go:322] "Service updated ports" service="webhook-8046/e2e-test-webhook" portCount=0
I0622 22:23:24.422542 10 service.go:462] "Removing service port" portName="webhook-8046/e2e-test-webhook"
I0622 22:23:24.422667 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:24.459997 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=63
I0622 22:23:24.466422 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.916193ms"
I0622 22:23:37.314136 10 service.go:322] "Service updated ports" service="webhook-2643/e2e-test-webhook" portCount=0
I0622 22:23:37.314179 10 service.go:462] "Removing service port" portName="webhook-2643/e2e-test-webhook"
I0622 22:23:37.314229 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:37.354034 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=58
I0622 22:23:37.359692 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.505456ms"
I0622 22:23:37.359845 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:37.394822 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56
I0622 22:23:37.400377 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.64055ms"
I0622 22:23:49.292492 10 service.go:322] "Service updated ports" service="services-6764/nodeport-update-service" portCount=0
I0622 22:23:49.292543 10 service.go:462] "Removing service port" portName="services-6764/nodeport-update-service:tcp-port"
I0622 22:23:49.292557 10 service.go:462] "Removing service port" portName="services-6764/nodeport-update-service:udp-port"
I0622 22:23:49.292606 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:49.327239 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=42
I0622 22:23:49.338379 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.834957ms"
I0622 22:23:49.338499 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:49.370679 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:23:49.376073 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.653413ms"
I0622 22:24:38.776431 10 service.go:322] "Service updated ports" service="sctp-9084/sctp-endpoint-test" portCount=1
I0622 22:24:38.776487 10 service.go:437] "Adding new service port" portName="sctp-9084/sctp-endpoint-test" servicePort="100.67.48.18:5060/SCTP"
I0622 22:24:38.776560 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:38.837858 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:24:38.844991 10 proxier.go:820] "SyncProxyRules complete" elapsed="68.506232ms"
I0622 22:24:38.845067 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:38.903977 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:24:38.912113 10 proxier.go:820] "SyncProxyRules complete" elapsed="67.076926ms"
I0622 22:24:41.670150 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:41.707609 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:24:41.713550 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.455258ms"
I0622 22:24:43.312787 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:43.352999 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36
I0622 22:24:43.358564 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.862311ms"
I0622 22:24:43.358692 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:43.394152 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:24:43.399300 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.692783ms"
I0622 22:24:54.539141 10 service.go:322] "Service updated ports" service="sctp-9084/sctp-endpoint-test" portCount=0
I0622 22:24:54.539188 10 service.go:462] "Removing service port" portName="sctp-9084/sctp-endpoint-test"
I0622 22:24:54.539232 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:54.585469 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:24:54.593809 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.612699ms"
I0622 22:24:54.593900 10 proxier.go:853] "Syncing iptables rules"
I0622 22:24:54.639382 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:24:54.647564 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.695436ms"
I0622 22:25:01.466796 10 service.go:322] "Service updated ports" service="webhook-3817/e2e-test-webhook" portCount=1
I0622 22:25:01.466857 10 service.go:437] "Adding new service port" portName="webhook-3817/e2e-test-webhook" servicePort="100.69.156.30:8443/TCP"
I0622 22:25:01.466901 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:01.506371 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:25:01.511462 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.611753ms"
I0622 22:25:01.511553 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:01.546104 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:25:01.550998 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.496234ms"
I0622 22:25:06.568600 10 service.go:322] "Service updated ports" service="webhook-6750/e2e-test-webhook" portCount=1
I0622 22:25:06.568663 10 service.go:437] "Adding new service port" portName="webhook-6750/e2e-test-webhook" servicePort="100.65.3.105:8443/TCP"
I0622 22:25:06.568712 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:06.602580 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39
I0622 22:25:06.607786 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.128548ms"
I0622 22:25:06.608079 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:06.644807 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=44
I0622 22:25:06.650700 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.85433ms"
I0622 22:25:08.583944 10 service.go:322] "Service updated ports" service="webhook-6750/e2e-test-webhook" portCount=0
I0622 22:25:08.583995 10 service.go:462] "Removing service port" portName="webhook-6750/e2e-test-webhook"
I0622 22:25:08.584037 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:08.619013 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=41
I0622 22:25:08.624181 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.184124ms"
I0622 22:25:08.624428 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:08.667471 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:25:08.673229 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.003084ms"
I0622 22:25:09.724858 10 service.go:322] "Service updated ports" service="dns-9691/test-service-2" portCount=1
I0622 22:25:09.724926 10 service.go:437] "Adding new service port" portName="dns-9691/test-service-2:http" servicePort="100.68.116.112:80/TCP"
I0622 22:25:09.724969 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:09.764951 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39
I0622 22:25:09.770997 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.07927ms"
I0622 22:25:09.833797 10 service.go:322] "Service updated ports" service="webhook-1285/e2e-test-webhook" portCount=1
I0622 22:25:10.771202 10 service.go:437] "Adding new service port" portName="webhook-1285/e2e-test-webhook" servicePort="100.65.119.5:8443/TCP"
I0622 22:25:10.771326 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:10.807500 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=44
I0622 22:25:10.812637 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.485432ms"
I0622 22:25:11.917290 10 service.go:322] "Service updated ports" service="services-7820/nodeport-reuse" portCount=1
I0622 22:25:11.917344 10 service.go:437] "Adding new service port" portName="services-7820/nodeport-reuse" servicePort="100.67.138.140:80/TCP"
I0622 22:25:11.917383 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:11.959477 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=44
I0622 22:25:11.964678 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.340757ms"
I0622 22:25:11.969587 10 service.go:322] "Service updated ports" service="services-7820/nodeport-reuse" portCount=0
I0622 22:25:12.964887 10 service.go:462] "Removing service port" portName="services-7820/nodeport-reuse"
I0622 22:25:12.965018 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:13.001714 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=49
I0622 22:25:13.006811 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.961399ms"
I0622 22:25:13.277512 10 service.go:322] "Service updated ports" service="webhook-3817/e2e-test-webhook" portCount=0
I0622 22:25:14.007016 10 service.go:462] "Removing service port" portName="webhook-3817/e2e-test-webhook"
I0622 22:25:14.007128 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:14.022958 10 service.go:322] "Service updated ports" service="webhook-1285/e2e-test-webhook" portCount=0
I0622 22:25:14.042085 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=46
I0622 22:25:14.047095 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.106491ms"
I0622 22:25:14.660878 10 service.go:322] "Service updated ports" service="services-7820/nodeport-reuse" portCount=1
I0622 22:25:14.660934 10 service.go:462] "Removing service port" portName="webhook-1285/e2e-test-webhook"
I0622 22:25:14.660956 10 service.go:437] "Adding new service port" portName="services-7820/nodeport-reuse" servicePort="100.69.141.7:80/TCP"
I0622 22:25:14.661010 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:14.697853 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=41
I0622 22:25:14.703198 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.264321ms"
I0622 22:25:14.709152 10 service.go:322] "Service updated ports" service="services-7820/nodeport-reuse" portCount=0
I0622 22:25:15.703446 10 service.go:462] "Removing service port" portName="services-7820/nodeport-reuse"
I0622 22:25:15.703531 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:15.740459 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:25:15.745526 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.116847ms"
I0622 22:25:45.853669 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:45.888711 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36
I0622 22:25:45.892288 10 service.go:322] "Service updated ports" service="dns-9691/test-service-2" portCount=0
I0622 22:25:45.894206 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.615695ms"
I0622 22:25:45.894247 10 service.go:462] "Removing service port" portName="dns-9691/test-service-2:http"
I0622 22:25:45.894442 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:45.929170 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:25:45.934145 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.902531ms"
I0622 22:25:46.934374 10 proxier.go:853] "Syncing iptables rules"
I0622 22:25:46.974893 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:25:46.980211 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.888508ms"
I0622 22:26:07.953338 10 service.go:322] "Service updated ports" service="proxy-7842/e2e-proxy-test-service" portCount=1
I0622 22:26:07.953397 10 service.go:437] "Adding new service port" portName="proxy-7842/e2e-proxy-test-service" servicePort="100.64.11.226:80/TCP"
I0622 22:26:07.953491 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:08.040276 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:26:08.046197 10 proxier.go:820] "SyncProxyRules complete" elapsed="92.80323ms"
I0622 22:26:08.046297 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:08.096411 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:26:08.104346 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.102369ms"
I0622 22:26:13.677347 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:13.724282 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36
I0622 22:26:13.730374 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.210432ms"
I0622 22:26:13.789271 10 service.go:322] "Service updated ports" service="proxy-7842/e2e-proxy-test-service" portCount=0
I0622 22:26:13.789319 10 service.go:462] "Removing service port" portName="proxy-7842/e2e-proxy-test-service"
I0622 22:26:13.789357 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:13.840996 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:26:13.847712 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.387831ms"
I0622 22:26:14.847985 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:14.881467 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:26:14.886803 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.904689ms"
I0622 22:26:26.490553 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6q89h" portCount=1
I0622 22:26:26.490629 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-6q89h" servicePort="100.66.74.245:80/TCP"
I0622 22:26:26.490668 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:26.526714 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:26:26.532127 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.499262ms"
I0622 22:26:26.532231 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:26.542039 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-fw5cn" portCount=1
I0622 22:26:26.549999 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ctdgx" portCount=1
I0622 22:26:26.567152 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:26:26.572623 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.452798ms"
I0622 22:26:26.573462 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-pdk5l" portCount=1
I0622 22:26:26.610248 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-58zmc" portCount=1
I0622 22:26:26.642851 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-m7zds" portCount=1
I0622 22:26:26.651285 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-xv9l6" portCount=1
I0622 22:26:26.662102 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7txmw" portCount=1
I0622 22:26:26.673385 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-qwpwx" portCount=1
I0622 22:26:26.687408 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dlvqf" portCount=1
I0622 22:26:26.693163 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ds7k5" portCount=1
I0622 22:26:26.704295 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-48gk4" portCount=1
I0622 22:26:26.721828 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-btx55" portCount=1
I0622 22:26:26.731874 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7fvtd" portCount=1
I0622 22:26:26.746946 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-x9f2h" portCount=1
I0622 22:26:26.753635 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-67vtf" portCount=1
I0622 22:26:26.759084 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-tgjxz" portCount=1
I0622 22:26:26.772122 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gpt84" portCount=1
I0622 22:26:26.787290 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dw9tc" portCount=1
I0622 22:26:26.807299 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-qd5gp" portCount=1
I0622 22:26:26.821146 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b9cv5" portCount=1
I0622 22:26:26.833222 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7ft96" portCount=1
I0622 22:26:26.842986 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-xjgfb" portCount=1
I0622 22:26:26.856941 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-bhgq5" portCount=1
I0622 22:26:26.870499 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hs7z4" portCount=1
I0622 22:26:26.884643 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-2qjrv" portCount=1
I0622 22:26:26.900902 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-knmqw" portCount=1
I0622 22:26:26.917821 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8pjmq" portCount=1
I0622 22:26:26.929456 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hx78g" portCount=1
I0622 22:26:26.941000 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-d258l" portCount=1
I0622 22:26:26.956264 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-q5wkr" portCount=1
I0622 22:26:26.968145 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jwcjp" portCount=1
I0622 22:26:26.976035 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6nc4n" portCount=1
I0622 22:26:27.003001 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ghl7p" portCount=1
I0622 22:26:27.013048 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-sq86l" portCount=1
I0622 22:26:27.019522 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ndkms" portCount=1
I0622 22:26:27.036588 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mlpd5" portCount=1
I0622 22:26:27.038911 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-wwj24" portCount=1
I0622 22:26:27.051284 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6ncth" portCount=1
I0622 22:26:27.064607 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-fdn7f" portCount=1
I0622 22:26:27.079372 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ft2rt" portCount=1
I0622 22:26:27.098530 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-c9zsc" portCount=1
I0622 22:26:27.111265 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mvwh4" portCount=1
I0622 22:26:27.143453 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-d9m54" portCount=1
I0622 22:26:27.149790 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hrd8w" portCount=1
I0622 22:26:27.161400 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-wkcgk" portCount=1
I0622 22:26:27.172815 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-nrb8w" portCount=1
I0622 22:26:27.186956 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-rk478" portCount=1
I0622 22:26:27.195534 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8lh62" portCount=1
I0622 22:26:27.213404 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-2kdj7" portCount=1
I0622 22:26:27.220255 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-nn7cc" portCount=1
I0622 22:26:27.230684 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-vp6gj" portCount=1
I0622 22:26:27.249608 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b8l2z" portCount=1
I0622 22:26:27.275298 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9jlfl" portCount=1
I0622 22:26:27.279679 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-95r7n" portCount=1
I0622 22:26:27.289939 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-brrdg" portCount=1
I0622 22:26:27.299697 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-76p8g" portCount=1
I0622 22:26:27.314533 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-drm4q" portCount=1
I0622 22:26:27.321631 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-sqgn4" portCount=1
I0622 22:26:27.331149 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jjmbb" portCount=1
I0622 22:26:27.359727 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-s2hsm" portCount=1
I0622 22:26:27.409563 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gk7qs" portCount=1
I0622 22:26:27.451384 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-qwz7d" portCount=1
I0622 22:26:27.500546 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-gpt84" servicePort="100.65.144.240:80/TCP"
I0622 22:26:27.500583 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-hs7z4" servicePort="100.65.94.70:80/TCP"
I0622 22:26:27.500598 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-c9zsc" servicePort="100.67.187.85:80/TCP"
I0622 22:26:27.500613 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-nn7cc" servicePort="100.71.57.204:80/TCP"
I0622 22:26:27.500627 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-76p8g" servicePort="100.68.23.146:80/TCP"
I0622 22:26:27.500640 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ctdgx" servicePort="100.64.179.52:80/TCP"
I0622 22:26:27.500681 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-pdk5l" servicePort="100.71.254.183:80/TCP"
I0622 22:26:27.500695 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-58zmc" servicePort="100.70.147.35:80/TCP"
I0622 22:26:27.500709 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-bhgq5" servicePort="100.65.235.65:80/TCP"
I0622 22:26:27.500727 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-6nc4n" servicePort="100.67.24.68:80/TCP"
I0622 22:26:27.500741 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-sq86l" servicePort="100.69.167.12:80/TCP"
I0622 22:26:27.500755 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-s2hsm" servicePort="100.67.125.200:80/TCP"
I0622 22:26:27.500768 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-7txmw" servicePort="100.67.95.114:80/TCP"
I0622 22:26:27.501130 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-dlvqf" servicePort="100.70.119.41:80/TCP"
I0622 22:26:27.501163 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-dw9tc" servicePort="100.64.236.96:80/TCP"
I0622 22:26:27.501177 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-wwj24" servicePort="100.66.252.71:80/TCP"
I0622 22:26:27.501356 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-fdn7f" servicePort="100.65.179.34:80/TCP"
I0622 22:26:27.501505 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-jjmbb" servicePort="100.67.6.34:80/TCP"
I0622 22:26:27.501532 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ghl7p" servicePort="100.70.188.32:80/TCP"
I0622 22:26:27.501610 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ndkms" servicePort="100.66.169.105:80/TCP"
I0622 22:26:27.501734 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ft2rt" servicePort="100.68.162.202:80/TCP"
I0622 22:26:27.501787 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-mvwh4" servicePort="100.69.141.0:80/TCP"
I0622 22:26:27.501942 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-drm4q" servicePort="100.70.131.29:80/TCP"
I0622 22:26:27.501972 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-b8l2z" servicePort="100.66.242.205:80/TCP"
I0622 22:26:27.501989 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-9jlfl" servicePort="100.68.12.33:80/TCP"
I0622 22:26:27.502224 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-fw5cn" servicePort="100.70.137.70:80/TCP"
I0622 22:26:27.502244 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-knmqw" servicePort="100.69.222.147:80/TCP"
I0622 22:26:27.502259 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-mlpd5" servicePort="100.66.187.25:80/TCP"
I0622 22:26:27.502274 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-hrd8w" servicePort="100.67.219.222:80/TCP"
I0622 22:26:27.502288 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-rk478" servicePort="100.68.4.66:80/TCP"
I0622 22:26:27.502302 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-2kdj7" servicePort="100.68.42.56:80/TCP"
I0622 22:26:27.502315 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-b9cv5" servicePort="100.65.53.191:80/TCP"
I0622 22:26:27.502367 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-d9m54" servicePort="100.68.247.171:80/TCP"
I0622 22:26:27.502385 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ds7k5" servicePort="100.71.168.47:80/TCP"
I0622 22:26:27.502402 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-vp6gj" servicePort="100.70.190.21:80/TCP"
I0622 22:26:27.502417 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-qwz7d" servicePort="100.64.122.234:80/TCP"
I0622 22:26:27.502433 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-m7zds" servicePort="100.69.136.32:80/TCP"
I0622 22:26:27.502448 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-qd5gp" servicePort="100.66.99.220:80/TCP"
I0622 22:26:27.502464 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-2qjrv" servicePort="100.65.211.189:80/TCP"
I0622 22:26:27.502478 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-8pjmq" servicePort="100.67.130.89:80/TCP"
I0622 22:26:27.502495 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-wkcgk" servicePort="100.67.252.191:80/TCP"
I0622 22:26:27.502509 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-8lh62" servicePort="100.68.31.250:80/TCP"
I0622 22:26:27.502556 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-sqgn4" servicePort="100.65.165.204:80/TCP"
I0622 22:26:27.502571 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-d258l" servicePort="100.66.26.168:80/TCP"
I0622 22:26:27.502588 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-brrdg" servicePort="100.68.114.176:80/TCP"
I0622 22:26:27.502602 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-7fvtd" servicePort="100.69.123.253:80/TCP"
I0622 22:26:27.502619 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-xjgfb" servicePort="100.70.63.199:80/TCP"
I0622 22:26:27.502635 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-jwcjp" servicePort="100.65.48.111:80/TCP"
I0622 22:26:27.502660 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-nrb8w" servicePort="100.64.180.83:80/TCP"
I0622 22:26:27.502676 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-btx55" servicePort="100.68.40.174:80/TCP"
I0622 22:26:27.502691 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-x9f2h" servicePort="100.67.191.104:80/TCP"
I0622 22:26:27.502706 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-7ft96" servicePort="100.68.60.97:80/TCP"
I0622 22:26:27.502785 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-hx78g" servicePort="100.64.230.165:80/TCP"
I0622 22:26:27.502806 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-gk7qs" servicePort="100.68.125.219:80/TCP"
I0622 22:26:27.502821 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-67vtf" servicePort="100.64.122.156:80/TCP"
I0622 22:26:27.502844 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-95r7n" servicePort="100.66.98.115:80/TCP"
I0622 22:26:27.502864 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-48gk4" servicePort="100.69.83.158:80/TCP"
I0622 22:26:27.502878 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-tgjxz" servicePort="100.66.201.254:80/TCP"
I0622 22:26:27.502893 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-6ncth" servicePort="100.68.175.204:80/TCP"
I0622 22:26:27.502907 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-xv9l6" servicePort="100.64.12.218:80/TCP"
I0622 22:26:27.502922 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-qwpwx" servicePort="100.65.31.43:80/TCP"
I0622 22:26:27.502938 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-q5wkr" servicePort="100.65.232.57:80/TCP"
I0622 22:26:27.503657 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:27.519051 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-5ffm7" portCount=1
I0622 22:26:27.541042 10 proxier.go:1461] "Reloading service iptables data" numServices=67 numEndpoints=57 numFilterChains=4 numFilterRules=16 numNATChains=115 numNATRules=284
I0622 22:26:27.551003 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8mkzt" portCount=1
I0622 22:26:27.552298 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.810318ms"
I0622 22:26:27.600234 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-lvvmv" portCount=1
I0622 22:26:27.644972 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-rj9sf" portCount=1
I0622 22:26:27.707398 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gl698" portCount=1
I0622 22:26:27.754103 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8snq6" portCount=1
I0622 22:26:27.802088 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-tbr7s" portCount=1
I0622 22:26:27.850664 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-brvgs" portCount=1
I0622 22:26:27.895932 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jcrsq" portCount=1
I0622 22:26:27.952105 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9vkgv" portCount=1
I0622 22:26:27.995345 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-txz6j" portCount=1
I0622 22:26:28.044276 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ntbjd" portCount=1
I0622 22:26:28.095930 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-4zgdc" portCount=1
I0622 22:26:28.149461 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-w7w7d" portCount=1
I0622 22:26:28.195446 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-89fb2" portCount=1
I0622 22:26:28.274709 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-rsptv" portCount=1
I0622 22:26:28.301496 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7ngjg" portCount=1
I0622 22:26:28.342747 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-tmz64" portCount=1
I0622 22:26:28.394561 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-rmn5v" portCount=1
I0622 22:26:28.441524 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-pdkbk" portCount=1
I0622 22:26:28.497120 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-tmgvz" portCount=1
I0622 22:26:28.497411 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-7ngjg" servicePort="100.66.11.181:80/TCP"
I0622 22:26:28.497618 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-tmgvz" servicePort="100.64.88.73:80/TCP"
I0622 22:26:28.497642 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-5ffm7" servicePort="100.68.52.235:80/TCP"
I0622 22:26:28.497659 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-brvgs" servicePort="100.69.100.50:80/TCP"
I0622 22:26:28.497674 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-txz6j" servicePort="100.66.162.117:80/TCP"
I0622 22:26:28.497687 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-89fb2" servicePort="100.64.103.47:80/TCP"
I0622 22:26:28.497700 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-lvvmv" servicePort="100.71.245.78:80/TCP"
I0622 22:26:28.497717 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-8snq6" servicePort="100.70.31.70:80/TCP"
I0622 22:26:28.497731 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-jcrsq" servicePort="100.66.194.44:80/TCP"
I0622 22:26:28.497745 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-rsptv" servicePort="100.69.117.122:80/TCP"
I0622 22:26:28.497763 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-gl698" servicePort="100.68.148.1:80/TCP"
I0622 22:26:28.497777 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-tbr7s" servicePort="100.67.16.179:80/TCP"
I0622 22:26:28.497793 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-tmz64" servicePort="100.71.123.242:80/TCP"
I0622 22:26:28.497807 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-pdkbk" servicePort="100.64.54.151:80/TCP"
I0622 22:26:28.497824 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-4zgdc" servicePort="100.69.145.185:80/TCP"
I0622 22:26:28.497837 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-w7w7d" servicePort="100.67.42.19:80/TCP"
I0622 22:26:28.497850 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-rmn5v" servicePort="100.71.178.221:80/TCP"
I0622 22:26:28.497865 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-8mkzt" servicePort="100.67.109.124:80/TCP"
I0622 22:26:28.497880 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-rj9sf" servicePort="100.68.68.51:80/TCP"
I0622 22:26:28.497894 10 service.go:437] "Adding 
new service port\" portName=\"svc-latency-7896/latency-svc-9vkgv\" servicePort=\"100.66.103.167:80/TCP\"\nI0622 22:26:28.497908 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-ntbjd\" servicePort=\"100.68.84.218:80/TCP\"\nI0622 22:26:28.498218 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:26:28.539443 10 proxier.go:1461] \"Reloading service iptables data\" numServices=88 numEndpoints=76 numFilterChains=4 numFilterRules=18 numNATChains=153 numNATRules=379\nI0622 22:26:28.551602 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.250208ms\"\nI0622 22:26:28.553483 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-25t49\" portCount=1\nI0622 22:26:28.600043 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rm87b\" portCount=1\nI0622 22:26:28.654944 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-5486f\" portCount=1\nI0622 22:26:28.693719 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xww5f\" portCount=1\nI0622 22:26:28.743400 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-gxpbc\" portCount=1\nI0622 22:26:28.798622 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-dxds5\" portCount=1\nI0622 22:26:28.860790 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-m9hn6\" portCount=1\nI0622 22:26:28.917183 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-5dw7l\" portCount=1\nI0622 22:26:28.944797 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xv68m\" portCount=1\nI0622 22:26:28.992235 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wlbvj\" portCount=1\nI0622 22:26:29.041918 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-9hxfx\" 
portCount=1\nI0622 22:26:29.090402 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-6hsk7\" portCount=1\nI0622 22:26:29.148976 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-7tqt4\" portCount=1\nI0622 22:26:29.212949 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rf6rm\" portCount=1\nI0622 22:26:29.248393 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-k5z7g\" portCount=1\nI0622 22:26:29.301394 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-lrghq\" portCount=1\nI0622 22:26:29.352086 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-9k24j\" portCount=1\nI0622 22:26:29.393116 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-c7q82\" portCount=1\nI0622 22:26:29.444076 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xm46t\" portCount=1\nI0622 22:26:29.499010 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-4sxvd\" portCount=1\nI0622 22:26:29.499066 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-9k24j\" servicePort=\"100.71.160.55:80/TCP\"\nI0622 22:26:29.499083 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-5486f\" servicePort=\"100.71.251.129:80/TCP\"\nI0622 22:26:29.499096 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-dxds5\" servicePort=\"100.70.106.247:80/TCP\"\nI0622 22:26:29.499109 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-m9hn6\" servicePort=\"100.64.195.130:80/TCP\"\nI0622 22:26:29.499125 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-xv68m\" servicePort=\"100.67.186.176:80/TCP\"\nI0622 22:26:29.499138 10 service.go:437] \"Adding new service 
port\" portName=\"svc-latency-7896/latency-svc-6hsk7\" servicePort=\"100.66.24.74:80/TCP\"\nI0622 22:26:29.499152 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-xww5f\" servicePort=\"100.69.76.171:80/TCP\"\nI0622 22:26:29.499166 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-gxpbc\" servicePort=\"100.67.17.82:80/TCP\"\nI0622 22:26:29.499190 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-5dw7l\" servicePort=\"100.69.138.26:80/TCP\"\nI0622 22:26:29.499206 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-rf6rm\" servicePort=\"100.69.147.176:80/TCP\"\nI0622 22:26:29.499218 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-4sxvd\" servicePort=\"100.69.117.41:80/TCP\"\nI0622 22:26:29.499232 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-rm87b\" servicePort=\"100.64.15.143:80/TCP\"\nI0622 22:26:29.499245 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-wlbvj\" servicePort=\"100.66.90.24:80/TCP\"\nI0622 22:26:29.499258 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-9hxfx\" servicePort=\"100.67.140.137:80/TCP\"\nI0622 22:26:29.499270 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-7tqt4\" servicePort=\"100.64.106.57:80/TCP\"\nI0622 22:26:29.499282 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-lrghq\" servicePort=\"100.67.55.200:80/TCP\"\nI0622 22:26:29.499297 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-25t49\" servicePort=\"100.68.148.207:80/TCP\"\nI0622 22:26:29.499329 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-k5z7g\" servicePort=\"100.68.117.60:80/TCP\"\nI0622 22:26:29.499344 10 service.go:437] 
\"Adding new service port\" portName=\"svc-latency-7896/latency-svc-c7q82\" servicePort=\"100.68.180.164:80/TCP\"\nI0622 22:26:29.499361 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-xm46t\" servicePort=\"100.67.1.211:80/TCP\"\nI0622 22:26:29.499718 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:26:29.543710 10 proxier.go:1461] \"Reloading service iptables data\" numServices=108 numEndpoints=96 numFilterChains=4 numFilterRules=18 numNATChains=193 numNATRules=479\nI0622 22:26:29.552665 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-ctzc6\" portCount=1\nI0622 22:26:29.558718 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.659894ms\"\nI0622 22:26:29.614872 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-p8dv9\" portCount=1\nI0622 22:26:29.657037 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-dpgs7\" portCount=1\nI0622 22:26:29.696851 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-59z9t\" portCount=1\nI0622 22:26:29.755230 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-nwrv5\" portCount=1\nI0622 22:26:29.816284 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zs9d9\" portCount=1\nI0622 22:26:29.865194 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-926sz\" portCount=1\nI0622 22:26:29.906956 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-g67qk\" portCount=1\nI0622 22:26:29.961634 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-2gqb7\" portCount=1\nI0622 22:26:29.998319 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-fttlv\" portCount=1\nI0622 22:26:30.043294 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-b2dmb\" 
portCount=1\nI0622 22:26:30.091166 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wwp6x\" portCount=1\nI0622 22:26:30.161610 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-n4pkl\" portCount=1\nI0622 22:26:30.216520 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-jbgzv\" portCount=1\nI0622 22:26:30.243147 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-x8wk5\" portCount=1\nI0622 22:26:30.293353 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pbrgj\" portCount=1\nI0622 22:26:30.371269 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-589f6\" portCount=1\nI0622 22:26:30.418553 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-9wdvd\" portCount=1\nI0622 22:26:30.465909 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-767kf\" portCount=1\nI0622 22:26:30.512334 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-nwrv5\" servicePort=\"100.64.215.251:80/TCP\"\nI0622 22:26:30.512459 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-wwp6x\" servicePort=\"100.65.140.230:80/TCP\"\nI0622 22:26:30.512515 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-n4pkl\" servicePort=\"100.64.241.68:80/TCP\"\nI0622 22:26:30.512539 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-x8wk5\" servicePort=\"100.65.31.252:80/TCP\"\nI0622 22:26:30.512558 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-9wdvd\" servicePort=\"100.70.118.4:80/TCP\"\nI0622 22:26:30.512577 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-dpgs7\" servicePort=\"100.64.15.247:80/TCP\"\nI0622 22:26:30.512622 10 
service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-59z9t\" servicePort=\"100.70.186.22:80/TCP\"\nI0622 22:26:30.512649 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-fttlv\" servicePort=\"100.66.26.2:80/TCP\"\nI0622 22:26:30.512670 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-jbgzv\" servicePort=\"100.68.138.15:80/TCP\"\nI0622 22:26:30.512684 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-589f6\" servicePort=\"100.70.71.224:80/TCP\"\nI0622 22:26:30.512700 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-ctzc6\" servicePort=\"100.65.18.63:80/TCP\"\nI0622 22:26:30.512753 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-p8dv9\" servicePort=\"100.71.204.247:80/TCP\"\nI0622 22:26:30.512791 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-926sz\" servicePort=\"100.67.235.133:80/TCP\"\nI0622 22:26:30.512964 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-g67qk\" servicePort=\"100.70.2.105:80/TCP\"\nI0622 22:26:30.512988 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-b2dmb\" servicePort=\"100.64.103.56:80/TCP\"\nI0622 22:26:30.513003 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-zs9d9\" servicePort=\"100.71.184.143:80/TCP\"\nI0622 22:26:30.513017 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-2gqb7\" servicePort=\"100.70.236.42:80/TCP\"\nI0622 22:26:30.513032 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-pbrgj\" servicePort=\"100.64.167.155:80/TCP\"\nI0622 22:26:30.513075 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-767kf\" servicePort=\"100.70.107.227:80/TCP\"\nI0622 
22:26:30.516111 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:26:30.532692 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-6kjlj\" portCount=1\nI0622 22:26:30.582265 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-hb9hk\" portCount=1\nI0622 22:26:30.598158 10 proxier.go:1461] \"Reloading service iptables data\" numServices=127 numEndpoints=117 numFilterChains=4 numFilterRules=16 numNATChains=235 numNATRules=584\nI0622 22:26:30.615102 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"102.786411ms\"\nI0622 22:26:30.624085 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-npjvl\" portCount=1\nI0622 22:26:30.659166 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-jckgx\" portCount=1\nI0622 22:26:30.692207 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tx8w2\" portCount=1\nI0622 22:26:30.746916 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-68fv4\" portCount=1\nI0622 22:26:30.806365 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-8b4t4\" portCount=1\nI0622 22:26:30.844358 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-2j7c4\" portCount=1\nI0622 22:26:30.898235 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-7vm9c\" portCount=1\nI0622 22:26:30.944034 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-gddvq\" portCount=1\nI0622 22:26:30.990345 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-hv7cd\" portCount=1\nI0622 22:26:31.048655 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-mkhrz\" portCount=1\nI0622 22:26:31.092976 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-r9qpx\" portCount=1\nI0622 
22:26:31.143188 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-r5n4d\" portCount=1\nI0622 22:26:31.192451 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-hmzvp\" portCount=1\nI0622 22:26:31.240970 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-gtb88\" portCount=1\nI0622 22:26:31.293127 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-mrvc8\" portCount=1\nI0622 22:26:31.339148 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-74cpx\" portCount=1\nI0622 22:26:31.393374 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zxfqr\" portCount=1\nI0622 22:26:31.444030 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-jqhrg\" portCount=1\nI0622 22:26:31.500467 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-gtb88\" servicePort=\"100.64.131.6:80/TCP\"\nI0622 22:26:31.500507 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-jqhrg\" servicePort=\"100.66.88.35:80/TCP\"\nI0622 22:26:31.500522 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-npjvl\" servicePort=\"100.70.47.58:80/TCP\"\nI0622 22:26:31.500890 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-tx8w2\" servicePort=\"100.71.125.230:80/TCP\"\nI0622 22:26:31.500932 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-68fv4\" servicePort=\"100.70.91.38:80/TCP\"\nI0622 22:26:31.501068 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-2j7c4\" servicePort=\"100.70.119.143:80/TCP\"\nI0622 22:26:31.501230 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-gddvq\" servicePort=\"100.66.193.237:80/TCP\"\nI0622 22:26:31.501463 10 
service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-hb9hk\" servicePort=\"100.67.66.54:80/TCP\"\nI0622 22:26:31.501605 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-hv7cd\" servicePort=\"100.65.125.27:80/TCP\"\nI0622 22:26:31.501633 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-r9qpx\" servicePort=\"100.71.22.54:80/TCP\"\nI0622 22:26:31.501648 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-74cpx\" servicePort=\"100.67.232.203:80/TCP\"\nI0622 22:26:31.501662 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-6kjlj\" servicePort=\"100.65.101.67:80/TCP\"\nI0622 22:26:31.501678 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-8b4t4\" servicePort=\"100.67.145.36:80/TCP\"\nI0622 22:26:31.501699 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-r5n4d\" servicePort=\"100.67.108.82:80/TCP\"\nI0622 22:26:31.501743 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-hmzvp\" servicePort=\"100.70.190.22:80/TCP\"\nI0622 22:26:31.501800 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-mrvc8\" servicePort=\"100.65.20.24:80/TCP\"\nI0622 22:26:31.501855 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-jckgx\" servicePort=\"100.69.17.231:80/TCP\"\nI0622 22:26:31.501909 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-7vm9c\" servicePort=\"100.68.66.160:80/TCP\"\nI0622 22:26:31.502284 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-mkhrz\" servicePort=\"100.70.128.170:80/TCP\"\nI0622 22:26:31.502367 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-zxfqr\" servicePort=\"100.70.154.215:80/TCP\"\nI0622 
22:26:31.502499 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-q62cg\" portCount=1\nI0622 22:26:31.502950 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:26:31.549099 10 proxier.go:1461] \"Reloading service iptables data\" numServices=147 numEndpoints=137 numFilterChains=4 numFilterRules=16 numNATChains=275 numNATRules=684\nI0622 22:26:31.556058 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-47zfb\" portCount=1\nI0622 22:26:31.568483 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.03694ms\"\nI0622 22:26:31.601023 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rfxf7\" portCount=1\nI0622 22:26:31.646621 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-htcdr\" portCount=1\nI0622 22:26:31.692019 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-h5h6d\" portCount=1\nI0622 22:26:31.758805 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tstvl\" portCount=1\nI0622 22:26:31.795858 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-v457r\" portCount=1\nI0622 22:26:31.854567 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pfn95\" portCount=1\nI0622 22:26:31.895028 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-t7zcm\" portCount=1\nI0622 22:26:31.942765 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-vlptw\" portCount=1\nI0622 22:26:31.999440 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-6ckdb\" portCount=1\nI0622 22:26:32.051668 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-q78kd\" portCount=1\nI0622 22:26:32.093390 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-n42xk\" portCount=1\nI0622 
22:26:32.142626 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zfsd6\" portCount=1\nI0622 22:26:32.193236 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-mmxgt\" portCount=1\nI0622 22:26:32.257376 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-r6t6n\" portCount=1\nI0622 22:26:32.290221 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wnbgn\" portCount=1\nI0622 22:26:32.345760 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-d4vw4\" portCount=1\nI0622 22:26:32.395126 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-szn9q\" portCount=1\nI0622 22:26:32.444588 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xtt4r\" portCount=1\nI0622 22:26:32.493213 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-57rqs\" portCount=1\nI0622 22:26:32.493270 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-d4vw4\" servicePort=\"100.65.59.159:80/TCP\"\nI0622 22:26:32.493507 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-szn9q\" servicePort=\"100.67.145.67:80/TCP\"\nI0622 22:26:32.493637 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-v457r\" servicePort=\"100.71.225.98:80/TCP\"\nI0622 22:26:32.493656 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-zfsd6\" servicePort=\"100.68.25.148:80/TCP\"\nI0622 22:26:32.493674 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-tstvl\" servicePort=\"100.70.199.233:80/TCP\"\nI0622 22:26:32.493796 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-t7zcm\" servicePort=\"100.64.64.95:80/TCP\"\nI0622 22:26:32.493820 10 service.go:437] \"Adding new 
service port\" portName=\"svc-latency-7896/latency-svc-q78kd\" servicePort=\"100.64.201.68:80/TCP\"\nI0622 22:26:32.494112 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-n42xk\" servicePort=\"100.71.10.198:80/TCP\"\nI0622 22:26:32.494135 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-mmxgt\" servicePort=\"100.69.245.202:80/TCP\"\nI0622 22:26:32.494196 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-wnbgn\" servicePort=\"100.65.199.203:80/TCP\"\nI0622 22:26:32.494216 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-q62cg\" servicePort=\"100.67.222.214:80/TCP\"\nI0622 22:26:32.494296 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-rfxf7\" servicePort=\"100.68.32.65:80/TCP\"\nI0622 22:26:32.494316 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-57rqs\" servicePort=\"100.66.65.182:80/TCP\"\nI0622 22:26:32.494330 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-vlptw\" servicePort=\"100.69.208.141:80/TCP\"\nI0622 22:26:32.494387 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-htcdr\" servicePort=\"100.69.32.143:80/TCP\"\nI0622 22:26:32.494406 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-h5h6d\" servicePort=\"100.68.41.26:80/TCP\"\nI0622 22:26:32.494464 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-6ckdb\" servicePort=\"100.71.207.206:80/TCP\"\nI0622 22:26:32.494483 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-r6t6n\" servicePort=\"100.68.0.160:80/TCP\"\nI0622 22:26:32.494535 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-xtt4r\" servicePort=\"100.70.37.47:80/TCP\"\nI0622 22:26:32.494555 10 
service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-47zfb\" servicePort=\"100.65.29.122:80/TCP\"\nI0622 22:26:32.494570 10 service.go:437] \"Adding new service port\" portName=\"svc-latency-7896/latency-svc-pfn95\" servicePort=\"100.67.211.62:80/TCP\"\nI0622 22:26:32.495173 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:26:32.543021 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-whzwd\" portCount=1\nI0622 22:26:32.544993 10 proxier.go:1461] \"Reloading service iptables data\" numServices=168 numEndpoints=156 numFilterChains=4 numFilterRules=18 numNATChains=313 numNATRules=779\nI0622 22:26:32.566201 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"72.93323ms\"\nI0622 22:26:32.603916 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-mf9f2\" portCount=1\nI0622 22:26:32.646948 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-65d75\" portCount=1\nI0622 22:26:32.692954 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wv9c8\" portCount=1\nI0622 22:26:32.750061 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rdcrn\" portCount=1\nI0622 22:26:32.795501 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-gnhxm\" portCount=1\nI0622 22:26:32.844029 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-jms8q\" portCount=1\nI0622 22:26:32.894839 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-fv8sh\" portCount=1\nI0622 22:26:32.952608 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-lvttm\" portCount=1\nI0622 22:26:33.006674 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-7q2jm\" portCount=1\nI0622 22:26:33.049719 10 service.go:322] \"Service updated ports\" 
service="svc-latency-7896/latency-svc-kp7l7" portCount=1
I0622 22:26:33.098281 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-pq9vf" portCount=1
I0622 22:26:33.153858 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-w7blq" portCount=1
I0622 22:26:33.207335 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-xpbjm" portCount=1
I0622 22:26:33.262640 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ghkwq" portCount=1
I0622 22:26:33.294371 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-wvwp4" portCount=1
I0622 22:26:33.362496 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-5h2s8" portCount=1
I0622 22:26:33.397999 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dkr82" portCount=1
I0622 22:26:33.447006 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9svc4" portCount=1
I0622 22:26:33.519091 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-kp7l7" servicePort="100.68.106.239:80/TCP"
I0622 22:26:33.519128 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-w7blq" servicePort="100.69.185.82:80/TCP"
I0622 22:26:33.519144 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ghkwq" servicePort="100.66.247.94:80/TCP"
I0622 22:26:33.519160 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-mf9f2" servicePort="100.71.48.156:80/TCP"
I0622 22:26:33.519444 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-65d75" servicePort="100.65.64.114:80/TCP"
I0622 22:26:33.519508 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-gnhxm" servicePort="100.65.210.222:80/TCP"
I0622 22:26:33.519658 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-fv8sh" servicePort="100.67.243.123:80/TCP"
I0622 22:26:33.519808 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-lvttm" servicePort="100.64.53.220:80/TCP"
I0622 22:26:33.519838 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-7q2jm" servicePort="100.67.116.255:80/TCP"
I0622 22:26:33.519853 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-xpbjm" servicePort="100.71.124.168:80/TCP"
I0622 22:26:33.519905 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-wvwp4" servicePort="100.66.156.86:80/TCP"
I0622 22:26:33.519923 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-whzwd" servicePort="100.65.4.80:80/TCP"
I0622 22:26:33.519937 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-wv9c8" servicePort="100.71.218.55:80/TCP"
I0622 22:26:33.519952 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-rdcrn" servicePort="100.68.0.242:80/TCP"
I0622 22:26:33.519966 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-jms8q" servicePort="100.71.163.184:80/TCP"
I0622 22:26:33.519979 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-5h2s8" servicePort="100.71.247.47:80/TCP"
I0622 22:26:33.520029 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-dkr82" servicePort="100.66.78.252:80/TCP"
I0622 22:26:33.520051 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-9svc4" servicePort="100.68.101.116:80/TCP"
I0622 22:26:33.520092 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-pq9vf" servicePort="100.69.65.174:80/TCP"
I0622 22:26:33.520624 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:33.556180 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mj4n9" portCount=1
I0622 22:26:33.576435 10 proxier.go:1461] "Reloading service iptables data" numServices=187 numEndpoints=177 numFilterChains=4 numFilterRules=16 numNATChains=355 numNATRules=884
I0622 22:26:33.576834 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-4rrr6" portCount=1
I0622 22:26:33.600646 10 proxier.go:820] "SyncProxyRules complete" elapsed="81.576692ms"
I0622 22:26:33.616071 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-pxw6w" portCount=1
I0622 22:26:33.690096 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-qwgsh" portCount=1
I0622 22:26:33.722800 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jv9cn" portCount=1
I0622 22:26:33.775402 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-rnnpd" portCount=1
I0622 22:26:33.813893 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-tkqgz" portCount=1
I0622 22:26:33.845106 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b4fdm" portCount=1
I0622 22:26:33.903818 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hk5cw" portCount=1
I0622 22:26:33.943126 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jtsx5" portCount=1
I0622 22:26:33.993852 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mzc5x" portCount=1
I0622 22:26:34.041653 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hjpj7" portCount=1
I0622 22:26:34.091306 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-zcchc" portCount=1
I0622 22:26:34.142341 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-pjmd7" portCount=1
I0622 22:26:34.188817 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-sjsbz" portCount=1
I0622 22:26:34.241105 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-vcsgn" portCount=1
I0622 22:26:34.292526 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-x6vjc" portCount=1
I0622 22:26:34.341411 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ksczm" portCount=1
I0622 22:26:34.501018 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-pxw6w" servicePort="100.64.162.101:80/TCP"
I0622 22:26:34.501054 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-qwgsh" servicePort="100.65.11.201:80/TCP"
I0622 22:26:34.501069 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-jv9cn" servicePort="100.68.217.92:80/TCP"
I0622 22:26:34.501084 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-rnnpd" servicePort="100.64.105.215:80/TCP"
I0622 22:26:34.501097 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-jtsx5" servicePort="100.67.36.196:80/TCP"
I0622 22:26:34.501109 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-4rrr6" servicePort="100.70.69.102:80/TCP"
I0622 22:26:34.501123 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-hk5cw" servicePort="100.64.197.29:80/TCP"
I0622 22:26:34.501136 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-mzc5x" servicePort="100.64.136.169:80/TCP"
I0622 22:26:34.501149 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-pjmd7" servicePort="100.64.7.247:80/TCP"
I0622 22:26:34.501163 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-b4fdm" servicePort="100.70.161.225:80/TCP"
I0622 22:26:34.501178 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-hjpj7" servicePort="100.64.57.101:80/TCP"
I0622 22:26:34.501196 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-x6vjc" servicePort="100.65.191.96:80/TCP"
I0622 22:26:34.501218 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-mj4n9" servicePort="100.67.176.111:80/TCP"
I0622 22:26:34.501234 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-tkqgz" servicePort="100.70.132.209:80/TCP"
I0622 22:26:34.501249 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-zcchc" servicePort="100.69.188.161:80/TCP"
I0622 22:26:34.501264 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-sjsbz" servicePort="100.68.20.36:80/TCP"
I0622 22:26:34.501277 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-vcsgn" servicePort="100.70.141.180:80/TCP"
I0622 22:26:34.501289 10 service.go:437] "Adding new service port" portName="svc-latency-7896/latency-svc-ksczm" servicePort="100.65.250.118:80/TCP"
I0622 22:26:34.501738 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:34.555542 10 proxier.go:1461] "Reloading service iptables data" numServices=205 numEndpoints=197 numFilterChains=4 numFilterRules=14 numNATChains=395 numNATRules=984
I0622 22:26:34.581256 10 proxier.go:820] "SyncProxyRules complete" elapsed="80.267639ms"
I0622 22:26:35.581771 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:35.638782 10 proxier.go:1461] "Reloading service iptables data" numServices=205 numEndpoints=208 numFilterChains=4 numFilterRules=3 numNATChains=417 numNATRules=1039
I0622 22:26:35.664933 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.506667ms"
I0622 22:26:40.246312 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:40.410330 10 proxier.go:1461] "Reloading service iptables data" numServices=205 numEndpoints=208 numFilterChains=4 numFilterRules=8 numNATChains=417 numNATRules=1024
I0622 22:26:40.441024 10 proxier.go:820] "SyncProxyRules complete" elapsed="195.235595ms"
I0622 22:26:40.441696 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:40.508563 10 proxier.go:1461] "Reloading service iptables data" numServices=205 numEndpoints=204 numFilterChains=4 numFilterRules=29 numNATChains=407 numNATRules=951
I0622 22:26:40.535032 10 proxier.go:820] "SyncProxyRules complete" elapsed="93.956027ms"
I0622 22:26:41.249466 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:41.305059 10 proxier.go:1461] "Reloading service iptables data" numServices=205 numEndpoints=75 numFilterChains=4 numFilterRules=154 numNATChains=365 numNATRules=534
I0622 22:26:41.320616 10 proxier.go:820] "SyncProxyRules complete" elapsed="72.563482ms"
I0622 22:26:41.737085 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-25t49" portCount=0
I0622 22:26:41.756178 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-2gqb7" portCount=0
I0622 22:26:41.769815 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-2j7c4" portCount=0
I0622 22:26:41.783089 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-2kdj7" portCount=0
I0622 22:26:41.795709 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-2qjrv" portCount=0
I0622 22:26:41.812026 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-47zfb" portCount=0
I0622 22:26:41.845786 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-48gk4" portCount=0
I0622 22:26:41.862801 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-4rrr6" portCount=0
I0622 22:26:41.886704 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-4sxvd" portCount=0
I0622 22:26:41.901969 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-4zgdc" portCount=0
I0622 22:26:41.919028 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-5486f" portCount=0
I0622 22:26:41.945871 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-57rqs" portCount=0
I0622 22:26:41.962970 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-589f6" portCount=0
I0622 22:26:41.990957 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-58zmc" portCount=0
I0622 22:26:42.011987 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-59z9t" portCount=0
I0622 22:26:42.027049 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-5dw7l" portCount=0
I0622 22:26:42.054467 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-5ffm7" portCount=0
I0622 22:26:42.077143 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-5h2s8" portCount=0
I0622 22:26:42.098784 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-65d75" portCount=0
I0622 22:26:42.110753 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-67vtf" portCount=0
I0622 22:26:42.132836 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-68fv4" portCount=0
I0622 22:26:42.152020 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6ckdb" portCount=0
I0622 22:26:42.168167 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6hsk7" portCount=0
I0622 22:26:42.191244 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6kjlj" portCount=0
I0622 22:26:42.208053 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6nc4n" portCount=0
I0622 22:26:42.230451 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6ncth" portCount=0
I0622 22:26:42.252360 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-6q89h" portCount=0
I0622 22:26:42.252709 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-2kdj7"
I0622 22:26:42.252806 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-2qjrv"
I0622 22:26:42.252890 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-5dw7l"
I0622 22:26:42.252942 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-47zfb"
I0622 22:26:42.253032 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-48gk4"
I0622 22:26:42.253114 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-4rrr6"
I0622 22:26:42.253160 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-58zmc"
I0622 22:26:42.253252 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-5h2s8"
I0622 22:26:42.253347 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-65d75"
I0622 22:26:42.253471 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-6q89h"
I0622 22:26:42.253564 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-2j7c4"
I0622 22:26:42.253660 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-589f6"
I0622 22:26:42.253722 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-67vtf"
I0622 22:26:42.253853 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-68fv4"
I0622 22:26:42.254070 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-6ncth"
I0622 22:26:42.254170 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-59z9t"
I0622 22:26:42.254260 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-6ckdb"
I0622 22:26:42.254354 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-5486f"
I0622 22:26:42.254445 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-6hsk7"
I0622 22:26:42.254534 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-25t49"
I0622 22:26:42.254592 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-6nc4n"
I0622 22:26:42.254678 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-4sxvd"
I0622 22:26:42.254777 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-4zgdc"
I0622 22:26:42.254849 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-6kjlj"
I0622 22:26:42.254947 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-2gqb7"
I0622 22:26:42.255041 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-57rqs"
I0622 22:26:42.255128 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-5ffm7"
I0622 22:26:42.256384 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:42.274707 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-74cpx" portCount=0
I0622 22:26:42.288639 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-767kf" portCount=0
I0622 22:26:42.304127 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-76p8g" portCount=0
I0622 22:26:42.329251 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7ft96" portCount=0
I0622 22:26:42.334877 10 proxier.go:1461] "Reloading service iptables data" numServices=178 numEndpoints=7 numFilterChains=4 numFilterRules=177 numNATChains=115 numNATRules=134
I0622 22:26:42.350970 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7fvtd" portCount=0
I0622 22:26:42.351860 10 proxier.go:820] "SyncProxyRules complete" elapsed="99.327498ms"
I0622 22:26:42.372749 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7ngjg" portCount=0
I0622 22:26:42.387382 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7q2jm" portCount=0
I0622 22:26:42.416051 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7tqt4" portCount=0
I0622 22:26:42.441026 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7txmw" portCount=0
I0622 22:26:42.453213 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-7vm9c" portCount=0
I0622 22:26:42.492740 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-89fb2" portCount=0
I0622 22:26:42.510158 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8b4t4" portCount=0
I0622 22:26:42.533915 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8lh62" portCount=0
I0622 22:26:42.545073 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8mkzt" portCount=0
I0622 22:26:42.563273 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8pjmq" portCount=0
I0622 22:26:42.576900 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-8snq6" portCount=0
I0622 22:26:42.595799 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-926sz" portCount=0
I0622 22:26:42.606514 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-95r7n" portCount=0
I0622 22:26:42.618010 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9hxfx" portCount=0
I0622 22:26:42.640640 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9jlfl" portCount=0
I0622 22:26:42.649727 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9k24j" portCount=0
I0622 22:26:42.661938 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9svc4" portCount=0
I0622 22:26:42.693815 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9vkgv" portCount=0
I0622 22:26:42.712482 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-9wdvd" portCount=0
I0622 22:26:42.730382 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b2dmb" portCount=0
I0622 22:26:42.746717 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b4fdm" portCount=0
I0622 22:26:42.758183 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b8l2z" portCount=0
I0622 22:26:42.777029 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-b9cv5" portCount=0
I0622 22:26:42.802145 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-bhgq5" portCount=0
I0622 22:26:42.817056 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-brrdg" portCount=0
I0622 22:26:42.834723 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-brvgs" portCount=0
I0622 22:26:42.850357 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-btx55" portCount=0
I0622 22:26:42.861237 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-c7q82" portCount=0
I0622 22:26:42.880678 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-c9zsc" portCount=0
I0622 22:26:42.890645 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ctdgx" portCount=0
I0622 22:26:42.906305 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ctzc6" portCount=0
I0622 22:26:42.920263 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-d258l" portCount=0
I0622 22:26:42.934021 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-d4vw4" portCount=0
I0622 22:26:42.947160 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-d9m54" portCount=0
I0622 22:26:42.969228 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dkr82" portCount=0
I0622 22:26:42.984757 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dlvqf" portCount=0
I0622 22:26:43.000615 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dpgs7" portCount=0
I0622 22:26:43.025562 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-drm4q" portCount=0
I0622 22:26:43.038486 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ds7k5" portCount=0
I0622 22:26:43.052636 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dw9tc" portCount=0
I0622 22:26:43.067596 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-dxds5" portCount=0
I0622 22:26:43.080475 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-fdn7f" portCount=0
I0622 22:26:43.091085 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ft2rt" portCount=0
I0622 22:26:43.108812 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-fttlv" portCount=0
I0622 22:26:43.135671 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-fv8sh" portCount=0
I0622 22:26:43.147305 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-fw5cn" portCount=0
I0622 22:26:43.172715 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-g67qk" portCount=0
I0622 22:26:43.185405 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gddvq" portCount=0
I0622 22:26:43.200182 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ghkwq" portCount=0
I0622 22:26:43.216474 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ghl7p" portCount=0
I0622 22:26:43.237949 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gk7qs" portCount=0
I0622 22:26:43.272734 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gl698" portCount=0
I0622 22:26:43.272831 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-c9zsc"
I0622 22:26:43.272856 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-dxds5"
I0622 22:26:43.273297 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7vm9c"
I0622 22:26:43.273361 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-8snq6"
I0622 22:26:43.273399 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-9vkgv"
I0622 22:26:43.273410 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-b2dmb"
I0622 22:26:43.273420 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-btx55"
I0622 22:26:43.273481 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7ngjg"
I0622 22:26:43.273646 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-89fb2"
I0622 22:26:43.273672 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-b9cv5"
I0622 22:26:43.273719 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-brrdg"
I0622 22:26:43.273731 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-b8l2z"
I0622 22:26:43.273743 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ghl7p"
I0622 22:26:43.273753 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-dkr82"
I0622 22:26:43.273790 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-drm4q"
I0622 22:26:43.273803 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-dw9tc"
I0622 22:26:43.273813 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-fv8sh"
I0622 22:26:43.273823 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-fw5cn"
I0622 22:26:43.273834 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-767kf"
I0622 22:26:43.274031 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-d258l"
I0622 22:26:43.274092 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ft2rt"
I0622 22:26:43.274119 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-74cpx"
I0622 22:26:43.274130 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7txmw"
I0622 22:26:43.274206 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-dpgs7"
I0622 22:26:43.274395 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gddvq"
I0622 22:26:43.274411 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7q2jm"
I0622 22:26:43.274422 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ctdgx"
I0622 22:26:43.274561 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ctzc6"
I0622 22:26:43.274585 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-d9m54"
I0622 22:26:43.274596 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-fttlv"
I0622 22:26:43.274606 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7fvtd"
I0622 22:26:43.274672 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-95r7n"
I0622 22:26:43.274692 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-d4vw4"
I0622 22:26:43.274703 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gk7qs"
I0622 22:26:43.274809 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7ft96"
I0622 22:26:43.274822 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-fdn7f"
I0622 22:26:43.274872 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-8b4t4"
I0622 22:26:43.274910 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-b4fdm"
I0622 22:26:43.274922 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-8pjmq"
I0622 22:26:43.274984 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ds7k5"
I0622 22:26:43.274996 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-926sz"
I0622 22:26:43.275020 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-g67qk"
I0622 22:26:43.275090 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-8lh62"
I0622 22:26:43.275102 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-9hxfx"
I0622 22:26:43.275112 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-9k24j"
I0622 22:26:43.275121 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-c7q82"
I0622 22:26:43.275460 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-9jlfl"
I0622 22:26:43.275470 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-9wdvd"
I0622 22:26:43.275678 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-bhgq5"
I0622 22:26:43.275701 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-76p8g"
I0622 22:26:43.275874 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ghkwq"
I0622 22:26:43.275896 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gl698"
I0622 22:26:43.275906 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-7tqt4"
I0622 22:26:43.275915 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-8mkzt"
I0622 22:26:43.276104 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-9svc4"
I0622 22:26:43.276120 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-brvgs"
I0622 22:26:43.276129 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-dlvqf"
I0622 22:26:43.276432 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:43.307326 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gnhxm" portCount=0
I0622 22:26:43.322659 10 proxier.go:1461] "Reloading service iptables data" numServices=121 numEndpoints=7 numFilterChains=4 numFilterRules=120 numNATChains=15 numNATRules=34
I0622 22:26:43.326789 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gpt84" portCount=0
I0622 22:26:43.330068 10 proxier.go:820] "SyncProxyRules complete" elapsed="57.271748ms"
I0622 22:26:43.355187 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gtb88" portCount=0
I0622 22:26:43.375670 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-gxpbc" portCount=0
I0622 22:26:43.390842 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-h5h6d" portCount=0
I0622 22:26:43.404264 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hb9hk" portCount=0
I0622 22:26:43.422513 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hjpj7" portCount=0
I0622 22:26:43.454562 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hk5cw" portCount=0
I0622 22:26:43.474935 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hmzvp" portCount=0
I0622 22:26:43.498098 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hrd8w" portCount=0
I0622 22:26:43.539007 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hs7z4" portCount=0
I0622 22:26:43.578678 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-htcdr" portCount=0
I0622 22:26:43.620228 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hv7cd" portCount=0
I0622 22:26:43.648882 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-hx78g" portCount=0
I0622 22:26:43.684816 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jbgzv" portCount=0
I0622 22:26:43.711035 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jckgx" portCount=0
I0622 22:26:43.729767 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jcrsq" portCount=0
I0622 22:26:43.750428 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jjmbb" portCount=0
I0622 22:26:43.817321 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jms8q" portCount=0
I0622 22:26:43.834072 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jqhrg" portCount=0
I0622 22:26:43.852207 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jtsx5" portCount=0
I0622 22:26:43.874762 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jv9cn" portCount=0
I0622 22:26:43.893341 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-jwcjp" portCount=0
I0622 22:26:43.921023 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-k5z7g" portCount=0
I0622 22:26:43.937012 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-knmqw" portCount=0
I0622 22:26:43.966059 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-kp7l7" portCount=0
I0622 22:26:43.993142 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-ksczm" portCount=0
I0622 22:26:44.032036 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-lrghq" portCount=0
I0622 22:26:44.066803 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-lvttm" portCount=0
I0622 22:26:44.099128 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-lvvmv" portCount=0
I0622 22:26:44.126347 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-m7zds" portCount=0
I0622 22:26:44.151683 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-m9hn6" portCount=0
I0622 22:26:44.177187 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mf9f2" portCount=0
I0622 22:26:44.208260 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mj4n9" portCount=0
I0622 22:26:44.232546 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mkhrz" portCount=0
I0622 22:26:44.253363 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mlpd5" portCount=0
I0622 22:26:44.253605 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-mj4n9"
I0622 22:26:44.253631 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hb9hk"
I0622 22:26:44.253665 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jbgzv"
I0622 22:26:44.253677 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-knmqw"
I0622 22:26:44.253688 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-mf9f2"
I0622 22:26:44.253700 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-h5h6d"
I0622 22:26:44.253731 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jqhrg"
I0622 22:26:44.253778 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jtsx5"
I0622 22:26:44.253790 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-k5z7g"
I0622 22:26:44.253802 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jms8q"
I0622 22:26:44.253813 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-lvvmv"
I0622 22:26:44.253837 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-lvttm"
I0622 22:26:44.253903 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-mlpd5"
I0622 22:26:44.253926 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gxpbc"
I0622 22:26:44.253937 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hrd8w"
I0622 22:26:44.253948 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-htcdr"
I0622 22:26:44.253958 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hv7cd"
I0622 22:26:44.253970 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gpt84"
I0622 22:26:44.253984 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jcrsq"
I0622 22:26:44.254002 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jjmbb"
I0622 22:26:44.254013 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jwcjp"
I0622 22:26:44.254024 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-ksczm"
I0622 22:26:44.254036 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gtb88"
I0622 22:26:44.254047 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hjpj7"
I0622 22:26:44.254059 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hk5cw"
I0622 22:26:44.254072 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hmzvp"
I0622 22:26:44.254089 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-m9hn6"
I0622 22:26:44.254100 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hs7z4"
I0622 22:26:44.254110 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-hx78g"
I0622 22:26:44.254121 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jv9cn"
I0622 22:26:44.254132 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-mkhrz"
I0622 22:26:44.254144 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-m7zds"
I0622 22:26:44.254160 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-gnhxm"
I0622 22:26:44.254170 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-jckgx"
I0622 22:26:44.254182 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-kp7l7"
I0622 22:26:44.254193 10 service.go:462] "Removing service port" portName="svc-latency-7896/latency-svc-lrghq"
I0622 22:26:44.254297 10 proxier.go:853] "Syncing iptables rules"
I0622 22:26:44.289498 10 service.go:322] "Service updated ports" service="svc-latency-7896/latency-svc-mmxgt" portCount=0
I0622 22:26:44.296308 10 proxier.go:1461] "Reloading service iptables data" numServices=85 numEndpoints=7 numFilterChains=4 numFilterRules=84 numNATChains=15 numNATRules=34
I0622 22:26:44.303082 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.509426ms"
I0622 22:26:44.382121 10 service.go:322] "Service updated ports" 
service=\"svc-latency-7896/latency-svc-mrvc8\" portCount=0\nI0622 22:26:44.467603 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-mvwh4\" portCount=0\nI0622 22:26:44.542844 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-mzc5x\" portCount=0\nI0622 22:26:44.597906 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-n42xk\" portCount=0\nI0622 22:26:44.650145 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-n4pkl\" portCount=0\nI0622 22:26:44.682623 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-ndkms\" portCount=0\nI0622 22:26:44.707192 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-nn7cc\" portCount=0\nI0622 22:26:44.719455 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-npjvl\" portCount=0\nI0622 22:26:44.734442 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-nrb8w\" portCount=0\nI0622 22:26:44.746661 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-ntbjd\" portCount=0\nI0622 22:26:44.760570 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-nwrv5\" portCount=0\nI0622 22:26:44.785854 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-p8dv9\" portCount=0\nI0622 22:26:44.801681 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pbrgj\" portCount=0\nI0622 22:26:44.816623 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pdk5l\" portCount=0\nI0622 22:26:44.851988 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pdkbk\" portCount=0\nI0622 22:26:44.895780 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pfn95\" portCount=0\nI0622 22:26:44.920055 10 
service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pjmd7\" portCount=0\nI0622 22:26:44.938261 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pq9vf\" portCount=0\nI0622 22:26:44.957675 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-pxw6w\" portCount=0\nI0622 22:26:44.978422 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-q5wkr\" portCount=0\nI0622 22:26:45.004229 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-q62cg\" portCount=0\nI0622 22:26:45.020121 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-q78kd\" portCount=0\nI0622 22:26:45.036485 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-qd5gp\" portCount=0\nI0622 22:26:45.055058 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-qwgsh\" portCount=0\nI0622 22:26:45.071001 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-qwpwx\" portCount=0\nI0622 22:26:45.084331 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-qwz7d\" portCount=0\nI0622 22:26:45.106088 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-r5n4d\" portCount=0\nI0622 22:26:45.124256 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-r6t6n\" portCount=0\nI0622 22:26:45.140874 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-r9qpx\" portCount=0\nI0622 22:26:45.153461 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rdcrn\" portCount=0\nI0622 22:26:45.169008 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rf6rm\" portCount=0\nI0622 22:26:45.180843 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rfxf7\" 
portCount=0\nI0622 22:26:45.192231 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rj9sf\" portCount=0\nI0622 22:26:45.209416 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rk478\" portCount=0\nI0622 22:26:45.230230 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rm87b\" portCount=0\nI0622 22:26:45.246811 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rmn5v\" portCount=0\nI0622 22:26:45.246880 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pfn95\"\nI0622 22:26:45.246896 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-r9qpx\"\nI0622 22:26:45.246907 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-ntbjd\"\nI0622 22:26:45.246919 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-nwrv5\"\nI0622 22:26:45.246930 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-p8dv9\"\nI0622 22:26:45.246941 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rfxf7\"\nI0622 22:26:45.246952 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-nn7cc\"\nI0622 22:26:45.246963 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pdk5l\"\nI0622 22:26:45.246973 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rdcrn\"\nI0622 22:26:45.246985 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-qwpwx\"\nI0622 22:26:45.246996 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-n42xk\"\nI0622 22:26:45.247007 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-n4pkl\"\nI0622 22:26:45.247017 10 service.go:462] \"Removing service port\" 
portName=\"svc-latency-7896/latency-svc-q78kd\"\nI0622 22:26:45.247028 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rf6rm\"\nI0622 22:26:45.247038 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rj9sf\"\nI0622 22:26:45.247049 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-mvwh4\"\nI0622 22:26:45.247059 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-ndkms\"\nI0622 22:26:45.247070 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-npjvl\"\nI0622 22:26:45.247080 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-nrb8w\"\nI0622 22:26:45.247091 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pjmd7\"\nI0622 22:26:45.247102 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rk478\"\nI0622 22:26:45.247113 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-mmxgt\"\nI0622 22:26:45.247122 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-q5wkr\"\nI0622 22:26:45.247133 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rm87b\"\nI0622 22:26:45.247142 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-r5n4d\"\nI0622 22:26:45.247152 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-mrvc8\"\nI0622 22:26:45.247163 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pxw6w\"\nI0622 22:26:45.247174 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-qd5gp\"\nI0622 22:26:45.247185 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pq9vf\"\nI0622 22:26:45.247197 10 service.go:462] \"Removing service port\" 
portName=\"svc-latency-7896/latency-svc-q62cg\"\nI0622 22:26:45.247208 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-qwgsh\"\nI0622 22:26:45.247218 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-qwz7d\"\nI0622 22:26:45.247228 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-r6t6n\"\nI0622 22:26:45.247238 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-mzc5x\"\nI0622 22:26:45.247249 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pbrgj\"\nI0622 22:26:45.247262 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-pdkbk\"\nI0622 22:26:45.247272 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rmn5v\"\nI0622 22:26:45.247371 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:26:45.270987 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rnnpd\" portCount=0\nI0622 22:26:45.284524 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-rsptv\" portCount=0\nI0622 22:26:45.300483 10 proxier.go:1461] \"Reloading service iptables data\" numServices=48 numEndpoints=7 numFilterChains=4 numFilterRules=47 numNATChains=15 numNATRules=34\nI0622 22:26:45.305391 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-s2hsm\" portCount=0\nI0622 22:26:45.310571 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"63.69647ms\"\nI0622 22:26:45.331020 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-sjsbz\" portCount=0\nI0622 22:26:45.347144 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-sq86l\" portCount=0\nI0622 22:26:45.356374 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-sqgn4\" portCount=0\nI0622 22:26:45.368617 10 
service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-szn9q\" portCount=0\nI0622 22:26:45.385262 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-t7zcm\" portCount=0\nI0622 22:26:45.400007 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tbr7s\" portCount=0\nI0622 22:26:45.414873 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tgjxz\" portCount=0\nI0622 22:26:45.436566 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tkqgz\" portCount=0\nI0622 22:26:45.448056 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tmgvz\" portCount=0\nI0622 22:26:45.461862 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tmz64\" portCount=0\nI0622 22:26:45.518011 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tstvl\" portCount=0\nI0622 22:26:45.575295 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-tx8w2\" portCount=0\nI0622 22:26:45.617417 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-txz6j\" portCount=0\nI0622 22:26:45.627368 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-v457r\" portCount=0\nI0622 22:26:45.644527 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-vcsgn\" portCount=0\nI0622 22:26:45.675927 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-vlptw\" portCount=0\nI0622 22:26:45.692313 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-vp6gj\" portCount=0\nI0622 22:26:45.706127 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-w7blq\" portCount=0\nI0622 22:26:45.728465 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-w7w7d\" 
portCount=0\nI0622 22:26:45.750535 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-whzwd\" portCount=0\nI0622 22:26:45.765066 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wkcgk\" portCount=0\nI0622 22:26:45.786964 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wlbvj\" portCount=0\nI0622 22:26:45.804562 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wnbgn\" portCount=0\nI0622 22:26:45.819661 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wv9c8\" portCount=0\nI0622 22:26:45.833318 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wvwp4\" portCount=0\nI0622 22:26:45.847227 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wwj24\" portCount=0\nI0622 22:26:45.872440 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-wwp6x\" portCount=0\nI0622 22:26:45.886388 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-x6vjc\" portCount=0\nI0622 22:26:45.903480 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-x8wk5\" portCount=0\nI0622 22:26:45.917705 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-x9f2h\" portCount=0\nI0622 22:26:45.933910 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xjgfb\" portCount=0\nI0622 22:26:45.946519 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xm46t\" portCount=0\nI0622 22:26:45.968451 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xpbjm\" portCount=0\nI0622 22:26:45.993928 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xtt4r\" portCount=0\nI0622 22:26:46.014863 10 service.go:322] \"Service updated ports\" 
service=\"svc-latency-7896/latency-svc-xv68m\" portCount=0\nI0622 22:26:46.038434 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xv9l6\" portCount=0\nI0622 22:26:46.055438 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-xww5f\" portCount=0\nI0622 22:26:46.083874 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zcchc\" portCount=0\nI0622 22:26:46.115189 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zfsd6\" portCount=0\nI0622 22:26:46.129104 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zs9d9\" portCount=0\nI0622 22:26:46.146511 10 service.go:322] \"Service updated ports\" service=\"svc-latency-7896/latency-svc-zxfqr\" portCount=0\nI0622 22:26:46.310863 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-vcsgn\"\nI0622 22:26:46.310900 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wv9c8\"\nI0622 22:26:46.310913 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-x8wk5\"\nI0622 22:26:46.310924 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-x9f2h\"\nI0622 22:26:46.310935 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tx8w2\"\nI0622 22:26:46.310946 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-w7w7d\"\nI0622 22:26:46.310957 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-whzwd\"\nI0622 22:26:46.310968 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wkcgk\"\nI0622 22:26:46.310979 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wvwp4\"\nI0622 22:26:46.310988 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xjgfb\"\nI0622 
22:26:46.310998 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-zcchc\"\nI0622 22:26:46.311008 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-zfsd6\"\nI0622 22:26:46.311021 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-s2hsm\"\nI0622 22:26:46.311032 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-sqgn4\"\nI0622 22:26:46.311043 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tbr7s\"\nI0622 22:26:46.311054 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-zxfqr\"\nI0622 22:26:46.311065 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-szn9q\"\nI0622 22:26:46.311074 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-t7zcm\"\nI0622 22:26:46.311084 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tmgvz\"\nI0622 22:26:46.311094 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-v457r\"\nI0622 22:26:46.311104 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wnbgn\"\nI0622 22:26:46.311114 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wwp6x\"\nI0622 22:26:46.311124 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rnnpd\"\nI0622 22:26:46.311134 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-sq86l\"\nI0622 22:26:46.311144 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tgjxz\"\nI0622 22:26:46.311154 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tkqgz\"\nI0622 22:26:46.311165 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tmz64\"\nI0622 22:26:46.311175 10 
service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xtt4r\"\nI0622 22:26:46.311184 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xv68m\"\nI0622 22:26:46.311195 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-tstvl\"\nI0622 22:26:46.311208 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-txz6j\"\nI0622 22:26:46.311218 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-vp6gj\"\nI0622 22:26:46.311228 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wlbvj\"\nI0622 22:26:46.311238 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-wwj24\"\nI0622 22:26:46.311249 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-zs9d9\"\nI0622 22:26:46.311259 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xm46t\"\nI0622 22:26:46.311268 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xv9l6\"\nI0622 22:26:46.311279 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-rsptv\"\nI0622 22:26:46.311291 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-sjsbz\"\nI0622 22:26:46.311302 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-vlptw\"\nI0622 22:26:46.311314 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-w7blq\"\nI0622 22:26:46.311325 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-x6vjc\"\nI0622 22:26:46.311336 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xpbjm\"\nI0622 22:26:46.311346 10 service.go:462] \"Removing service port\" portName=\"svc-latency-7896/latency-svc-xww5f\"\nI0622 22:26:46.311437 10 proxier.go:853] \"Syncing 
iptables rules\"\nI0622 22:26:46.356138 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:26:46.360961 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.163387ms\"\nI0622 22:27:13.490261 10 service.go:322] \"Service updated ports\" service=\"webhook-262/e2e-test-webhook\" portCount=1\nI0622 22:27:13.490325 10 service.go:437] \"Adding new service port\" portName=\"webhook-262/e2e-test-webhook\" servicePort=\"100.69.189.90:8443/TCP\"\nI0622 22:27:13.490537 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:27:13.524806 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:27:13.529906 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.587494ms\"\nI0622 22:27:13.530221 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:27:13.564613 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:27:13.570109 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.153218ms\"\nI0622 22:27:17.039759 10 service.go:322] \"Service updated ports\" service=\"webhook-262/e2e-test-webhook\" portCount=0\nI0622 22:27:17.039811 10 service.go:462] \"Removing service port\" portName=\"webhook-262/e2e-test-webhook\"\nI0622 22:27:17.039930 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:27:17.075587 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 22:27:17.080490 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.678468ms\"\nI0622 22:27:17.080788 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:27:17.120857 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 
numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:27:17.136379 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.8292ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-east1-b-3xs4 ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-east1-b-t83b ====\n2022/06/22 22:10:50 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): /usr/local/bin/kube-proxy,--cluster-cidr=100.96.0.0/11,--conntrack-max-per-core=131072,--hostname-override=nodes-us-east1-b-t83b,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-e2e-kops-gce-stable.k8s.local,--oom-score-adj=-998,--v=2\n2022/06/22 22:10:50 Now listening for interrupts\nI0622 22:10:50.581433 10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0622 22:10:50.581544 10 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0622 22:10:50.581573 10 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0622 22:10:50.582011 10 flags.go:64] FLAG: --bind-address-hard-fail=\"false\"\nI0622 22:10:50.582021 10 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0622 22:10:50.582029 10 flags.go:64] FLAG: --cleanup=\"false\"\nI0622 22:10:50.582035 10 flags.go:64] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0622 22:10:50.582044 10 flags.go:64] FLAG: --config=\"\"\nI0622 22:10:50.582049 10 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0622 22:10:50.582179 10 flags.go:64] FLAG: --conntrack-max-per-core=\"131072\"\nI0622 22:10:50.582196 10 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0622 22:10:50.582201 10 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0622 22:10:50.582207 10 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0622 22:10:50.582213 10 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0622 22:10:50.582219 10 flags.go:64] FLAG: 
--feature-gates=\"\"\nI0622 22:10:50.582227 10 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0622 22:10:50.582238 10 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0622 22:10:50.582245 10 flags.go:64] FLAG: --help=\"false\"\nI0622 22:10:50.582256 10 flags.go:64] FLAG: --hostname-override=\"nodes-us-east1-b-t83b\"\nI0622 22:10:50.582262 10 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0622 22:10:50.582267 10 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0622 22:10:50.582273 10 flags.go:64] FLAG: --iptables-sync-period=\"30s\"\nI0622 22:10:50.582279 10 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0622 22:10:50.582303 10 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0622 22:10:50.582309 10 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0622 22:10:50.582314 10 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0622 22:10:50.582319 10 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0622 22:10:50.582325 10 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0622 22:10:50.582330 10 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0622 22:10:50.582336 10 flags.go:64] FLAG: --ipvs-udp-timeout=\"0s\"\nI0622 22:10:50.582344 10 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0622 22:10:50.582350 10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0622 22:10:50.582356 10 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0622 22:10:50.582369 10 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0622 22:10:50.582375 10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0622 22:10:50.582385 10 flags.go:64] FLAG: --log-dir=\"\"\nI0622 22:10:50.582400 10 flags.go:64] FLAG: --log-file=\"\"\nI0622 22:10:50.582406 10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0622 22:10:50.582412 10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0622 22:10:50.582418 10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0622 22:10:50.582424 10 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0622 
22:10:50.582432 10 flags.go:64] FLAG: --masquerade-all="false"
I0622 22:10:50.582438 10 flags.go:64] FLAG: --master="https://api.internal.e2e-e2e-kops-gce-stable.k8s.local"
I0622 22:10:50.582448 10 flags.go:64] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0622 22:10:50.582454 10 flags.go:64] FLAG: --metrics-port="10249"
I0622 22:10:50.582468 10 flags.go:64] FLAG: --nodeport-addresses="[]"
I0622 22:10:50.582479 10 flags.go:64] FLAG: --one-output="false"
I0622 22:10:50.582485 10 flags.go:64] FLAG: --oom-score-adj="-998"
I0622 22:10:50.582491 10 flags.go:64] FLAG: --pod-bridge-interface=""
I0622 22:10:50.582496 10 flags.go:64] FLAG: --pod-interface-name-prefix=""
I0622 22:10:50.582505 10 flags.go:64] FLAG: --profiling="false"
I0622 22:10:50.582511 10 flags.go:64] FLAG: --proxy-mode=""
I0622 22:10:50.582526 10 flags.go:64] FLAG: --proxy-port-range=""
I0622 22:10:50.582533 10 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
I0622 22:10:50.582538 10 flags.go:64] FLAG: --skip-headers="false"
I0622 22:10:50.582543 10 flags.go:64] FLAG: --skip-log-headers="false"
I0622 22:10:50.582548 10 flags.go:64] FLAG: --stderrthreshold="2"
I0622 22:10:50.582557 10 flags.go:64] FLAG: --udp-timeout="250ms"
I0622 22:10:50.582563 10 flags.go:64] FLAG: --v="2"
I0622 22:10:50.582569 10 flags.go:64] FLAG: --version="false"
I0622 22:10:50.582577 10 flags.go:64] FLAG: --vmodule=""
I0622 22:10:50.582582 10 flags.go:64] FLAG: --write-config-to=""
I0622 22:10:50.582607 10 server.go:231] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
I0622 22:10:50.582742 10 feature_gate.go:245] feature gates: &{map[]}
I0622 22:10:50.583428 10 feature_gate.go:245] feature gates: &{map[]}
E0622 22:10:50.646551 10 node.go:152] Failed to retrieve node info: Get "https://api.internal.e2e-e2e-kops-gce-stable.k8s.local/api/v1/nodes/nodes-us-east1-b-t83b": dial tcp: lookup api.internal.e2e-e2e-kops-gce-stable.k8s.local on 169.254.169.254:53: no such host
E0622 22:10:51.793910 10 node.go:152] Failed to retrieve node info: Get "https://api.internal.e2e-e2e-kops-gce-stable.k8s.local/api/v1/nodes/nodes-us-east1-b-t83b": dial tcp: lookup api.internal.e2e-e2e-kops-gce-stable.k8s.local on 169.254.169.254:53: no such host
E0622 22:10:54.005543 10 node.go:152] Failed to retrieve node info: Get "https://api.internal.e2e-e2e-kops-gce-stable.k8s.local/api/v1/nodes/nodes-us-east1-b-t83b": dial tcp: lookup api.internal.e2e-e2e-kops-gce-stable.k8s.local on 169.254.169.254:53: no such host
E0622 22:10:58.199122 10 node.go:152] Failed to retrieve node info: nodes "nodes-us-east1-b-t83b" not found
I0622 22:11:07.714763 10 node.go:163] Successfully retrieved node IP: 10.0.16.3
I0622 22:11:07.714806 10 server_others.go:138] "Detected node IP" address="10.0.16.3"
I0622 22:11:07.714834 10 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0622 22:11:07.714936 10 server_others.go:175] "DetectLocalMode" LocalMode="ClusterCIDR"
I0622 22:11:07.757385 10 server_others.go:206] "Using iptables Proxier"
I0622 22:11:07.757442 10 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0622 22:11:07.757458 10 server_others.go:214] "Creating dualStackProxier for iptables"
I0622 22:11:07.757475 10 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0622 22:11:07.757501 10 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0622 22:11:07.757620 10 utils.go:431] "Changed sysctl" name="net/ipv4/conf/all/route_localnet" before=0 after=1
I0622 22:11:07.757671 10 proxier.go:275] "Using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I0622 22:11:07.757697 10 proxier.go:319] "Iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0622 22:11:07.757723 10 proxier.go:329] "Iptables supports --random-fully" ipFamily=IPv4
I0622 22:11:07.757735 10 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0622 22:11:07.757784 10 proxier.go:275] "Using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I0622 22:11:07.757812 10 proxier.go:319] "Iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0622 22:11:07.757832 10 proxier.go:329] "Iptables supports --random-fully" ipFamily=IPv6
I0622 22:11:07.757943 10 server.go:661] "Version info" version="v1.25.0-alpha.1"
I0622 22:11:07.757950 10 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0622 22:11:07.758882 10 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=262144
I0622 22:11:07.758945 10 conntrack.go:100] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_close_wait" value=3600
I0622 22:11:07.760140 10 config.go:317] "Starting service config controller"
I0622 22:11:07.760158 10 shared_informer.go:255] Waiting for caches to sync for service config
I0622 22:11:07.760187 10 config.go:226] "Starting endpoint slice config controller"
I0622 22:11:07.760194 10 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0622 22:11:07.760780 10 config.go:444] "Starting node config controller"
I0622 22:11:07.760788 10 shared_informer.go:255] Waiting for caches to sync for node config
I0622 22:11:07.763548 10 proxier.go:812] "Not syncing iptables until Services and Endpoints have been received from master"
I0622 22:11:07.763668 10 proxier.go:812] "Not syncing iptables until Services and Endpoints have been received from master"
I0622 22:11:07.763801 10 service.go:322] "Service updated ports" service="default/kubernetes" portCount=1
I0622 22:11:07.763887 10 service.go:322] "Service updated ports" service="kube-system/kube-dns" portCount=3
I0622 22:11:07.861077 10 shared_informer.go:262] Caches are synced for service config
I0622 22:11:07.861077 10 shared_informer.go:262] Caches are synced for node config
I0622 22:11:07.861125 10 shared_informer.go:262] Caches are synced for endpoint slice config
I0622 22:11:07.861153 10 proxier.go:812] "Not syncing iptables until Services and Endpoints have been received from master"
I0622 22:11:07.861168 10 proxier.go:812] "Not syncing iptables until Services and Endpoints have been received from master"
I0622 22:11:07.861236 10 service.go:437] "Adding new service port" portName="default/kubernetes:https" servicePort="100.64.0.1:443/TCP"
I0622 22:11:07.861254 10 service.go:437] "Adding new service port" portName="kube-system/kube-dns:dns" servicePort="100.64.0.10:53/UDP"
I0622 22:11:07.861269 10 service.go:437] "Adding new service port" portName="kube-system/kube-dns:dns-tcp" servicePort="100.64.0.10:53/TCP"
I0622 22:11:07.861282 10 service.go:437] "Adding new service port" portName="kube-system/kube-dns:metrics" servicePort="100.64.0.10:9153/TCP"
I0622 22:11:07.861318 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:07.925297 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 22:11:07.938139 10 proxier.go:820] "SyncProxyRules complete" elapsed="76.940301ms"
I0622 22:11:07.938164 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:08.019095 10 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 22:11:08.021376 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.21167ms"
I0622 22:11:11.066842 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:11.102397 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 22:11:11.106819 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.989521ms"
I0622 22:11:11.106860 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:11.133744 10 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 22:11:11.135730 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.862309ms"
I0622 22:11:12.503084 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:12.537209 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 22:11:12.541341 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.347728ms"
I0622 22:11:13.210371 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:13.241897 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 22:11:13.245713 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.445099ms"
I0622 22:11:13.522617 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I0622 22:11:13.522669 10 proxier.go:853] "Syncing iptables rules"
I0622 22:11:13.556818 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:11:13.569325 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.945136ms"
I0622 22:14:28.060687 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:28.102023 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:14:28.107091 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.448613ms"
I0622 22:14:28.181806 10 service.go:322] "Service updated ports" service="endpointslice-7752/example-int-port" portCount=1
I0622 22:14:28.181856 10 service.go:437] "Adding new service port" portName="endpointslice-7752/example-int-port:example" servicePort="100.67.71.200:80/TCP"
I0622 22:14:28.181877 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:28.218384 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:14:28.225515 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.665637ms"
I0622 22:14:28.245287 10 service.go:322] "Service updated ports" service="services-1292/svc-tolerate-unready" portCount=1
I0622 22:14:28.323731 10 service.go:322] "Service updated ports" service="endpointslice-7752/example-named-port" portCount=1
I0622 22:14:28.400250 10 service.go:322] "Service updated ports" service="endpointslice-7752/example-no-match" portCount=1
I0622 22:14:29.225723 10 service.go:437] "Adding new service port" portName="endpointslice-7752/example-no-match:example-no-match" servicePort="100.68.141.149:80/TCP"
I0622 22:14:29.225760 10 service.go:437] "Adding new service port" portName="services-1292/svc-tolerate-unready:http" servicePort="100.68.86.88:80/TCP"
I0622 22:14:29.225776 10 service.go:437] "Adding new service port" portName="endpointslice-7752/example-named-port:http" servicePort="100.66.175.94:80/TCP"
I0622 22:14:29.225824 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:29.271883 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=8 numNATChains=15 numNATRules=34
I0622 22:14:29.278909 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.230211ms"
I0622 22:14:41.704048 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:41.754703 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 22:14:41.764466 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.455719ms"
I0622 22:14:43.371009 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:43.428848 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 22:14:43.435161 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.160444ms"
I0622 22:14:43.435209 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:43.477869 10 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 22:14:43.481098 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.887957ms"
I0622 22:14:51.227927 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:51.263533 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 22:14:51.269046 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.15573ms"
I0622 22:14:52.300689 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:52.336306 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=47
I0622 22:14:52.341470 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.818121ms"
I0622 22:14:52.522256 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:52.563876 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=47
I0622 22:14:52.569357 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.232799ms"
I0622 22:14:53.570576 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:53.608049 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=55
I0622 22:14:53.613341 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.897402ms"
I0622 22:14:54.614413 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:54.650007 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=55
I0622 22:14:54.655820 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.52089ms"
I0622 22:14:55.178572 10 service.go:322] "Service updated ports" service="services-4345/sourceip-test" portCount=1
I0622 22:14:55.258106 10 service.go:322] "Service updated ports" service="services-9672/service-headless-toggled" portCount=1
I0622 22:14:55.656462 10 service.go:437] "Adding new service port" portName="services-4345/sourceip-test" servicePort="100.67.117.150:8080/TCP"
I0622 22:14:55.656613 10 service.go:437] "Adding new service port" portName="services-9672/service-headless-toggled" servicePort="100.65.250.168:80/TCP"
I0622 22:14:55.656702 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:55.699822 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=55
I0622 22:14:55.705093 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.665782ms"
I0622 22:14:56.793052 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:56.843492 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=55
I0622 22:14:56.850906 10 proxier.go:820] "SyncProxyRules complete" elapsed="57.912922ms"
I0622 22:14:58.704108 10 proxier.go:853] "Syncing iptables rules"
I0622 22:14:58.752758 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=60
I0622 22:14:58.764524 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.47583ms"
I0622 22:15:06.223729 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:06.259936 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=63
I0622 22:15:06.265719 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.061006ms"
I0622 22:15:08.766803 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:08.818277 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=61
I0622 22:15:08.828854 10 proxier.go:820] "SyncProxyRules complete" elapsed="62.114766ms"
I0622 22:15:08.828929 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:08.878376 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=57
I0622 22:15:08.885465 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.57177ms"
I0622 22:15:09.774173 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:09.824257 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=60
I0622 22:15:09.830175 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.054408ms"
I0622 22:15:10.830797 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:10.874666 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=68
I0622 22:15:10.883940 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.241112ms"
I0622 22:15:11.920743 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:11.969249 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=68
I0622 22:15:11.976434 10 proxier.go:820] "SyncProxyRules complete" elapsed="55.718624ms"
I0622 22:15:11.976482 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:12.012730 10 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 22:15:12.015711 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.231215ms"
I0622 22:15:12.123555 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:12.162919 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=71
I0622 22:15:12.169072 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.747301ms"
I0622 22:15:24.154819 10 service.go:322] "Service updated ports" service="endpointslice-7752/example-int-port" portCount=0
I0622 22:15:24.154874 10 service.go:462] "Removing service port" portName="endpointslice-7752/example-int-port:example"
I0622 22:15:24.155473 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:24.196173 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=68
I0622 22:15:24.201017 10 service.go:322] "Service updated ports" service="endpointslice-7752/example-named-port" portCount=0
I0622 22:15:24.203497 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.626159ms"
I0622 22:15:24.203532 10 service.go:462] "Removing service port" portName="endpointslice-7752/example-named-port:http"
I0622 22:15:24.203592 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:24.210815 10 service.go:322] "Service updated ports" service="endpointslice-7752/example-no-match" portCount=0
I0622 22:15:24.238973 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=27 numNATRules=61
I0622 22:15:24.244886 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.356874ms"
I0622 22:15:25.245067 10 service.go:462] "Removing service port" portName="endpointslice-7752/example-no-match:example-no-match"
I0622 22:15:25.245167 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:25.279506 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=58
I0622 22:15:25.284814 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.777124ms"
I0622 22:15:26.425902 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:26.465770 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=58
I0622 22:15:26.471713 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.87744ms"
I0622 22:15:26.885077 10 service.go:322] "Service updated ports" service="services-4345/sourceip-test" portCount=0
I0622 22:15:27.472567 10 service.go:462] "Removing service port" portName="services-4345/sourceip-test"
I0622 22:15:27.472653 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:27.520300 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=55
I0622 22:15:27.525408 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.889414ms"
I0622 22:15:39.611372 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:39.653868 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53
I0622 22:15:39.659249 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.904324ms"
I0622 22:15:39.659290 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:39.692732 10 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 22:15:39.694685 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.39617ms"
I0622 22:15:42.591019 10 service.go:322] "Service updated ports" service="services-2806/svc-not-tolerate-unready" portCount=1
I0622 22:15:42.591097 10 service.go:437] "Adding new service port" portName="services-2806/svc-not-tolerate-unready:http" servicePort="100.66.228.172:80/TCP"
I0622 22:15:42.591123 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:42.631467 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 22:15:42.637798 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.727998ms"
I0622 22:15:42.637859 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:42.673760 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 22:15:42.679056 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.221273ms"
I0622 22:15:43.796919 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:43.834262 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 22:15:43.839948 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.079098ms"
I0622 22:15:44.840556 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:44.878568 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61
I0622 22:15:44.884083 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.625483ms"
I0622 22:15:51.796692 10 service.go:322] "Service updated ports" service="services-9672/service-headless-toggled" portCount=0
I0622 22:15:51.796753 10 service.go:462] "Removing service port" portName="services-9672/service-headless-toggled"
I0622 22:15:51.796794 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:51.862484 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=54
I0622 22:15:51.868464 10 proxier.go:820] "SyncProxyRules complete" elapsed="71.708772ms"
I0622 22:15:55.001927 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:55.054110 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0622 22:15:55.060869 10 proxier.go:820] "SyncProxyRules complete" elapsed="59.027298ms"
I0622 22:15:57.636064 10 service.go:322] "Service updated ports" service="kubectl-1665/rm2" portCount=1
I0622 22:15:57.636126 10 service.go:437] "Adding new service port" portName="kubectl-1665/rm2" servicePort="100.69.213.114:1234/TCP"
I0622 22:15:57.636159 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:57.674206 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 22:15:57.679277 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.159716ms"
I0622 22:15:57.679354 10 proxier.go:853] "Syncing iptables rules"
I0622 22:15:57.726220 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=47
I0622 22:15:57.731405 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.091093ms"
I0622 22:15:59.964724 10 service.go:322] "Service updated ports" service="kubectl-1665/rm3" portCount=1
I0622 22:15:59.964781 10 service.go:437] "Adding new service port" portName="kubectl-1665/rm3" servicePort="100.71.205.129:2345/TCP"
I0622 22:15:59.964906 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:00.005834 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=47
I0622 22:16:00.010962 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.185389ms"
I0622 22:16:00.011030 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:00.045834 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=52
I0622 22:16:00.051239 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.242797ms"
I0622 22:16:00.576611 10 service.go:322] "Service updated ports" service="services-9672/service-headless-toggled" portCount=1
I0622 22:16:01.051465 10 service.go:437] "Adding new service port" portName="services-9672/service-headless-toggled" servicePort="100.65.250.168:80/TCP"
I0622 22:16:01.051522 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:01.092301 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=63
I0622 22:16:01.099890 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.483255ms"
I0622 22:16:01.510839 10 service.go:322] "Service updated ports" service="conntrack-7541/svc-udp" portCount=1
I0622 22:16:02.100081 10 service.go:437] "Adding new service port" portName="conntrack-7541/svc-udp:udp" servicePort="100.67.145.9:80/UDP"
I0622 22:16:02.100168 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:02.171163 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=63
I0622 22:16:02.178028 10 proxier.go:820] "SyncProxyRules complete" elapsed="77.984117ms"
I0622 22:16:04.598362 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:04.607832 10 service.go:322] "Service updated ports" service="services-1292/svc-tolerate-unready" portCount=0
I0622 22:16:04.635569 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=58
I0622 22:16:04.641031 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.729426ms"
I0622 22:16:04.641102 10 service.go:462] "Removing service port" portName="services-1292/svc-tolerate-unready:http"
I0622 22:16:04.641141 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:04.683611 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=55
I0622 22:16:04.689431 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.335011ms"
I0622 22:16:07.350052 10 service.go:322] "Service updated ports" service="kubectl-1665/rm2" portCount=0
I0622 22:16:07.350110 10 service.go:462] "Removing service port" portName="kubectl-1665/rm2"
I0622 22:16:07.350144 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:07.379007 10 service.go:322] "Service updated ports" service="kubectl-1665/rm3" portCount=0
I0622 22:16:07.390353 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=52
I0622 22:16:07.396151 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.042194ms"
I0622 22:16:07.396190 10 service.go:462] "Removing service port" portName="kubectl-1665/rm3"
I0622 22:16:07.396431 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:07.432484 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=47
I0622 22:16:07.438204 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.01566ms"
I0622 22:16:08.439064 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:08.493748 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=45
I0622 22:16:08.499561 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.570027ms"
I0622 22:16:09.500666 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:09.542602 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=45
I0622 22:16:09.548116 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.61231ms"
I0622 22:16:17.633245 10 service.go:322] "Service updated ports" service="services-1137/nodeport-range-test" portCount=1
I0622 22:16:17.633297 10 service.go:437] "Adding new service port" portName="services-1137/nodeport-range-test" servicePort="100.64.67.8:80/TCP"
I0622 22:16:17.633331 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:17.670007 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=45
I0622 22:16:17.675508 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.219168ms"
I0622 22:16:17.675564 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:17.720942 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=45
I0622 22:16:17.741064 10 service.go:322] "Service updated ports" service="services-1137/nodeport-range-test" portCount=0
I0622 22:16:17.741916 10 proxier.go:820] "SyncProxyRules complete" elapsed="66.372802ms"
I0622 22:16:18.742604 10 service.go:462] "Removing service port" portName="services-1137/nodeport-range-test"
I0622 22:16:18.742677 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:18.782192 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=45
I0622 22:16:18.787592 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.01975ms"
I0622 22:16:34.242648 10 service.go:322] "Service updated ports" service="services-9672/service-headless-toggled" portCount=0
I0622 22:16:34.242690 10 service.go:462] "Removing service port" portName="services-9672/service-headless-toggled"
I0622 22:16:34.242723 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:34.296378 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=38
I0622 22:16:34.301941 10 proxier.go:820] "SyncProxyRules complete" elapsed="59.247761ms"
I0622 22:16:34.302043 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:34.344373 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=34
I0622 22:16:34.367136 10 proxier.go:820] "SyncProxyRules complete" elapsed="65.158982ms"
I0622 22:16:38.511461 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:38.570865 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=34
I0622 22:16:38.580672 10 proxier.go:820] "SyncProxyRules complete" elapsed="69.273172ms"
I0622 22:16:39.316874 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-7541/svc-udp:udp" clusterIP="100.67.145.9"
I0622 22:16:39.316903 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:39.373389 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39
I0622 22:16:39.384899 10 proxier.go:820] "SyncProxyRules complete" elapsed="68.139668ms"
I0622 22:16:46.049805 10 service.go:322] "Service updated ports" service="pods-9168/fooservice" portCount=1
I0622 22:16:46.049865 10 service.go:437] "Adding new service port" portName="pods-9168/fooservice" servicePort="100.64.246.28:8765/TCP"
I0622 22:16:46.049890 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:46.084175 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 22:16:46.089279 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.421056ms"
I0622 22:16:46.089430 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:46.127511 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 22:16:46.132926 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.612596ms"
I0622 22:16:55.200835 10 service.go:322] "Service updated ports" service="services-1506/nodeport-test" portCount=1
I0622 22:16:55.200890 10 service.go:437] "Adding new service port" portName="services-1506/nodeport-test:http" servicePort="100.70.181.51:80/TCP"
I0622 22:16:55.200916 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:55.247297 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=44
I0622 22:16:55.254948 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.063708ms"
I0622 22:16:55.254999 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:55.308662 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=44
I0622 22:16:55.314658 10 proxier.go:820] "SyncProxyRules complete" elapsed="59.676991ms"
I0622 22:16:55.444892 10 service.go:322] "Service updated ports" service="conntrack-7541/svc-udp" portCount=0
I0622 22:16:56.315692 10 service.go:462] "Removing service port" portName="conntrack-7541/svc-udp:udp"
I0622 22:16:56.315776 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:56.354304 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=41
I0622 22:16:56.364519 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.850448ms"
I0622 22:16:59.569716 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:59.606382 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=8 numNATChains=17 numNATRules=36
I0622 22:16:59.611270 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.602312ms"
I0622 22:16:59.621199 10 service.go:322] "Service updated ports" service="pods-9168/fooservice" portCount=0
I0622 22:16:59.621241 10 service.go:462] "Removing service port" portName="pods-9168/fooservice"
I0622 22:16:59.621335 10 proxier.go:853] "Syncing iptables rules"
I0622 22:16:59.679056 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34
I0622 22:16:59.685045 10 proxier.go:820] "SyncProxyRules complete" elapsed="63.788827ms"
I0622 22:17:00.822018 10 service.go:322] "Service updated ports" service="services-8061/e2e-svc-a-ktbsx" portCount=1
I0622 22:17:00.822082 10 service.go:437] "Adding new service port" portName="services-8061/e2e-svc-a-ktbsx:http" servicePort="100.71.100.43:8001/TCP"
I0622 22:17:00.822109 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:00.860642 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=8 numNATChains=15 numNATRules=34
I0622 22:17:00.866302 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.233141ms"
I0622 22:17:00.868332 10 service.go:322] "Service updated ports" service="services-8061/e2e-svc-b-kms72" portCount=1
I0622 22:17:00.911556 10 service.go:322] "Service updated ports" service="services-8061/e2e-svc-c-zm5t7" portCount=1
I0622 22:17:00.990704 10 service.go:322] "Service updated ports" service="services-8061/e2e-svc-a-ktbsx" portCount=0
I0622 22:17:01.639508 10 service.go:437] "Adding new service port" portName="services-8061/e2e-svc-c-zm5t7:http" servicePort="100.64.100.163:8003/TCP"
I0622 22:17:01.639538 10 service.go:462] "Removing service port" portName="services-8061/e2e-svc-a-ktbsx:http"
I0622 22:17:01.639583 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:01.676476 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 22:17:01.682097 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.611155ms"
I0622 22:17:04.416163 10 service.go:322] "Service updated ports" service="services-3197/hairpin-test" portCount=1
I0622 22:17:04.416223 10 service.go:437] "Adding new service port" portName="services-3197/hairpin-test" servicePort="100.66.208.125:8080/TCP"
I0622 22:17:04.416248 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:04.464464 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42
I0622 22:17:04.472547 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.310019ms"
I0622 22:17:04.472599 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:04.518753 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42
I0622 22:17:04.524878 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.282002ms"
I0622 22:17:05.526042 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:05.573428 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=45
I0622 22:17:05.580769 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.830419ms"
I0622 22:17:06.173842 10 service.go:322] "Service updated ports" service="services-8061/e2e-svc-c-zm5t7" portCount=0
I0622 22:17:06.581157 10 service.go:462] "Removing service port" portName="services-8061/e2e-svc-c-zm5t7:http"
I0622 22:17:06.581258 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:06.623589 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=50
I0622 22:17:06.629817 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.685624ms"
I0622 22:17:13.229132 10 service.go:322] "Service updated ports" service="webhook-2605/e2e-test-webhook" portCount=1
I0622 22:17:13.229216 10 service.go:437] "Adding new service port" portName="webhook-2605/e2e-test-webhook" servicePort="100.70.14.251:8443/TCP"
I0622 22:17:13.229265 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:13.266430 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=50
I0622 22:17:13.272640 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.430828ms"
I0622 22:17:13.272932 10 proxier.go:853] "Syncing iptables rules"
I0622 22:17:13.314007 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=55
I0622 22:17:13.319658 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.982304ms"
I0622 22:17:13.754722 10 service.go:322] "Service updated ports" service="services-3197/hairpin-test" portCount=0
I0622 22:17:14.320670 10 service.go:462] "Removing service port"
portName=\"services-3197/hairpin-test\"\nI0622 22:17:14.320746 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:14.361174 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=52\nI0622 22:17:14.366727 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.088796ms\"\nI0622 22:17:14.671674 10 service.go:322] \"Service updated ports\" service=\"webhook-2605/e2e-test-webhook\" portCount=0\nI0622 22:17:15.367765 10 service.go:462] \"Removing service port\" portName=\"webhook-2605/e2e-test-webhook\"\nI0622 22:17:15.367857 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:15.426306 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=47\nI0622 22:17:15.433710 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.984257ms\"\nI0622 22:17:16.977985 10 service.go:322] \"Service updated ports\" service=\"services-2806/svc-not-tolerate-unready\" portCount=0\nI0622 22:17:16.978043 10 service.go:462] \"Removing service port\" portName=\"services-2806/svc-not-tolerate-unready:http\"\nI0622 22:17:16.978078 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:17.058524 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0622 22:17:17.069925 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"91.879745ms\"\nI0622 22:17:18.070159 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:18.106447 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0622 22:17:18.111049 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.953884ms\"\nI0622 22:17:31.046410 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:31.082533 10 proxier.go:1461] 
\"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43\nI0622 22:17:31.087416 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.209203ms\"\nI0622 22:17:31.087638 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:31.121476 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=18 numNATRules=37\nI0622 22:17:31.127638 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.191077ms\"\nI0622 22:17:31.176229 10 service.go:322] \"Service updated ports\" service=\"services-1506/nodeport-test\" portCount=0\nI0622 22:17:32.128577 10 service.go:462] \"Removing service port\" portName=\"services-1506/nodeport-test:http\"\nI0622 22:17:32.128666 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:32.178121 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:17:32.184843 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.18063ms\"\nI0622 22:17:32.910528 10 service.go:322] \"Service updated ports\" service=\"conntrack-8655/boom-server\" portCount=1\nI0622 22:17:33.185734 10 service.go:437] \"Adding new service port\" portName=\"conntrack-8655/boom-server\" servicePort=\"100.66.148.45:9000/TCP\"\nI0622 22:17:33.185805 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:33.233956 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:17:33.239719 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.019268ms\"\nI0622 22:17:43.964993 10 service.go:322] \"Service updated ports\" service=\"services-4885/multi-endpoint-test\" portCount=2\nI0622 22:17:43.965055 10 service.go:437] \"Adding new service port\" portName=\"services-4885/multi-endpoint-test:portname1\" 
servicePort=\"100.66.197.16:80/TCP\"\nI0622 22:17:43.965071 10 service.go:437] \"Adding new service port\" portName=\"services-4885/multi-endpoint-test:portname2\" servicePort=\"100.66.197.16:81/TCP\"\nI0622 22:17:43.965099 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:44.001468 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 22:17:44.006880 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.834657ms\"\nI0622 22:17:44.006928 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:44.046082 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 22:17:44.051311 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.399147ms\"\nI0622 22:17:48.574035 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:17:48.609887 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=44\nI0622 22:17:48.614847 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.878601ms\"\nI0622 22:18:01.279329 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:01.332779 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=49\nI0622 22:18:01.339482 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.222421ms\"\nI0622 22:18:05.113033 10 service.go:322] \"Service updated ports\" service=\"funny-ips-1305/funny-ip\" portCount=1\nI0622 22:18:05.113081 10 service.go:437] \"Adding new service port\" portName=\"funny-ips-1305/funny-ip:http\" servicePort=\"100.66.148.11:7180/TCP\"\nI0622 22:18:05.113109 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:05.159905 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 
numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=49\nI0622 22:18:05.166890 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.805913ms\"\nI0622 22:18:05.166951 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:05.211184 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=49\nI0622 22:18:05.217891 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.961775ms\"\nI0622 22:18:07.872055 10 service.go:322] \"Service updated ports\" service=\"services-3074/externalip-test\" portCount=1\nI0622 22:18:07.872108 10 service.go:437] \"Adding new service port\" portName=\"services-3074/externalip-test:http\" servicePort=\"100.68.15.78:80/TCP\"\nI0622 22:18:07.872146 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:07.933232 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=49\nI0622 22:18:07.940277 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.175209ms\"\nI0622 22:18:07.940335 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:08.035060 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=49\nI0622 22:18:08.044296 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"103.980612ms\"\nI0622 22:18:09.075885 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:09.129819 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=54\nI0622 22:18:09.137256 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"61.422882ms\"\nI0622 22:18:10.138108 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:10.185573 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 
numFilterRules=3 numNATChains=26 numNATRules=62\nI0622 22:18:10.195436 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.396986ms\"\nI0622 22:18:11.687984 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:11.742523 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=59\nI0622 22:18:11.748842 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.918976ms\"\nI0622 22:18:12.695991 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:12.759022 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57\nI0622 22:18:12.765758 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"69.832215ms\"\nI0622 22:18:12.974080 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:13.015303 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=54\nI0622 22:18:13.020488 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.461637ms\"\nI0622 22:18:13.112394 10 service.go:322] \"Service updated ports\" service=\"services-4885/multi-endpoint-test\" portCount=0\nI0622 22:18:14.020764 10 service.go:462] \"Removing service port\" portName=\"services-4885/multi-endpoint-test:portname1\"\nI0622 22:18:14.020808 10 service.go:462] \"Removing service port\" portName=\"services-4885/multi-endpoint-test:portname2\"\nI0622 22:18:14.020866 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:14.060276 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=52\nI0622 22:18:14.065706 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.96788ms\"\nI0622 22:18:14.542029 10 service.go:322] \"Service updated ports\" service=\"services-8248/externalname-service\" 
portCount=1\nI0622 22:18:15.066795 10 service.go:437] \"Adding new service port\" portName=\"services-8248/externalname-service:http\" servicePort=\"100.69.162.246:80/TCP\"\nI0622 22:18:15.066845 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:15.118789 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52\nI0622 22:18:15.124253 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.49607ms\"\nI0622 22:18:17.871658 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:17.911671 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=57\nI0622 22:18:17.917174 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.566342ms\"\nI0622 22:18:18.238765 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:18.289479 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=60\nI0622 22:18:18.296556 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.847049ms\"\nI0622 22:18:20.900136 10 service.go:322] \"Service updated ports\" service=\"funny-ips-1305/funny-ip\" portCount=0\nI0622 22:18:20.900262 10 service.go:462] \"Removing service port\" portName=\"funny-ips-1305/funny-ip:http\"\nI0622 22:18:20.900327 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:20.936045 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=57\nI0622 22:18:20.941991 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.734201ms\"\nI0622 22:18:20.942061 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:20.978230 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 
numNATRules=55\nI0622 22:18:20.983866 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.838312ms\"\nI0622 22:18:25.632097 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:25.674058 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=58\nI0622 22:18:25.679190 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.150574ms\"\nI0622 22:18:28.386907 10 service.go:322] \"Service updated ports\" service=\"webhook-6881/e2e-test-webhook\" portCount=1\nI0622 22:18:28.386988 10 service.go:437] \"Adding new service port\" portName=\"webhook-6881/e2e-test-webhook\" servicePort=\"100.69.148.171:8443/TCP\"\nI0622 22:18:28.387020 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:28.453567 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58\nI0622 22:18:28.459962 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"72.98705ms\"\nI0622 22:18:28.460041 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:28.527152 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=63\nI0622 22:18:28.532880 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"72.860301ms\"\nI0622 22:18:29.985218 10 service.go:322] \"Service updated ports\" service=\"webhook-6881/e2e-test-webhook\" portCount=0\nI0622 22:18:29.985264 10 service.go:462] \"Removing service port\" portName=\"webhook-6881/e2e-test-webhook\"\nI0622 22:18:29.985294 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:30.021907 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=60\nI0622 22:18:30.028064 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.796536ms\"\nI0622 22:18:30.769659 10 
service.go:322] \"Service updated ports\" service=\"services-8248/externalname-service\" portCount=0\nI0622 22:18:30.769707 10 service.go:462] \"Removing service port\" portName=\"services-8248/externalname-service:http\"\nI0622 22:18:30.769757 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:30.833794 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=53\nI0622 22:18:30.842038 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"72.328417ms\"\nI0622 22:18:31.842804 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:31.877815 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 22:18:31.883477 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.747166ms\"\nI0622 22:18:33.889670 10 service.go:322] \"Service updated ports\" service=\"kubectl-7799/agnhost-primary\" portCount=1\nI0622 22:18:33.889738 10 service.go:437] \"Adding new service port\" portName=\"kubectl-7799/agnhost-primary\" servicePort=\"100.68.252.181:6379/TCP\"\nI0622 22:18:33.889767 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:33.928672 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50\nI0622 22:18:33.934541 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.810975ms\"\nI0622 22:18:33.934622 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:33.971157 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50\nI0622 22:18:33.977098 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.496535ms\"\nI0622 22:18:39.599826 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:39.620825 10 service.go:322] \"Service updated ports\" 
service=\"services-3074/externalip-test\" portCount=0\nI0622 22:18:39.646299 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=43\nI0622 22:18:39.651695 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.032445ms\"\nI0622 22:18:39.651729 10 service.go:462] \"Removing service port\" portName=\"services-3074/externalip-test:http\"\nI0622 22:18:39.651752 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:39.700798 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0622 22:18:39.705712 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.986563ms\"\nI0622 22:18:40.004031 10 service.go:322] \"Service updated ports\" service=\"kubectl-7799/agnhost-primary\" portCount=0\nI0622 22:18:40.706277 10 service.go:462] \"Removing service port\" portName=\"kubectl-7799/agnhost-primary\"\nI0622 22:18:40.706337 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:40.740694 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:18:40.745886 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.668866ms\"\nI0622 22:18:42.349481 10 service.go:322] \"Service updated ports\" service=\"conntrack-8655/boom-server\" portCount=0\nI0622 22:18:42.349536 10 service.go:462] \"Removing service port\" portName=\"conntrack-8655/boom-server\"\nI0622 22:18:42.349566 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:42.385533 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 22:18:42.391440 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.903952ms\"\nI0622 22:18:43.391623 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:43.429769 
10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:18:43.434981 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.410915ms\"\nI0622 22:18:52.266599 10 service.go:322] \"Service updated ports\" service=\"sctp-192/sctp-clusterip\" portCount=1\nI0622 22:18:52.266727 10 service.go:437] \"Adding new service port\" portName=\"sctp-192/sctp-clusterip\" servicePort=\"100.65.22.213:5060/SCTP\"\nI0622 22:18:52.266786 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:52.301954 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:18:52.311697 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.014171ms\"\nI0622 22:18:52.311746 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:18:52.347313 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:18:52.353976 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.24584ms\"\nI0622 22:19:01.907495 10 service.go:322] \"Service updated ports\" service=\"services-4250/test-service-6ntfw\" portCount=1\nI0622 22:19:01.907718 10 service.go:437] \"Adding new service port\" portName=\"services-4250/test-service-6ntfw:http\" servicePort=\"100.65.163.109:80/TCP\"\nI0622 22:19:01.907759 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:01.944395 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 22:19:01.949628 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.881929ms\"\nI0622 22:19:02.020443 10 service.go:322] \"Service updated ports\" service=\"services-4250/test-service-6ntfw\" portCount=1\nI0622 22:19:02.020669 10 service.go:439] \"Updating existing service 
port\" portName=\"services-4250/test-service-6ntfw:http\" servicePort=\"100.65.163.109:80/TCP\"\nI0622 22:19:02.020707 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:02.057180 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=34\nI0622 22:19:02.062634 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.146033ms\"\nI0622 22:19:02.203360 10 service.go:322] \"Service updated ports\" service=\"services-4250/test-service-6ntfw\" portCount=1\nI0622 22:19:02.284028 10 service.go:322] \"Service updated ports\" service=\"services-4250/test-service-6ntfw\" portCount=0\nI0622 22:19:03.062789 10 service.go:462] \"Removing service port\" portName=\"services-4250/test-service-6ntfw:http\"\nI0622 22:19:03.062858 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:03.099818 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:19:03.106399 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.641961ms\"\nI0622 22:19:03.647110 10 service.go:322] \"Service updated ports\" service=\"sctp-192/sctp-clusterip\" portCount=0\nI0622 22:19:04.106814 10 service.go:462] \"Removing service port\" portName=\"sctp-192/sctp-clusterip\"\nI0622 22:19:04.106864 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:04.148911 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:19:04.154593 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.776835ms\"\nI0622 22:19:06.665039 10 service.go:322] \"Service updated ports\" service=\"webhook-6349/e2e-test-webhook\" portCount=1\nI0622 22:19:06.665097 10 service.go:437] \"Adding new service port\" portName=\"webhook-6349/e2e-test-webhook\" servicePort=\"100.71.217.61:8443/TCP\"\nI0622 22:19:06.665123 10 
proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:06.700303 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:19:06.705335 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.245136ms\"\nI0622 22:19:06.705542 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:06.741340 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:19:06.747122 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.748143ms\"\nI0622 22:19:07.966414 10 service.go:322] \"Service updated ports\" service=\"kubectl-2629/agnhost-primary\" portCount=1\nI0622 22:19:07.966469 10 service.go:437] \"Adding new service port\" portName=\"kubectl-2629/agnhost-primary\" servicePort=\"100.67.84.245:6379/TCP\"\nI0622 22:19:07.966505 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:08.002720 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0622 22:19:08.008688 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.228654ms\"\nI0622 22:19:08.136248 10 service.go:322] \"Service updated ports\" service=\"webhook-6349/e2e-test-webhook\" portCount=0\nI0622 22:19:09.009754 10 service.go:462] \"Removing service port\" portName=\"webhook-6349/e2e-test-webhook\"\nI0622 22:19:09.009839 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:09.043991 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36\nI0622 22:19:09.048916 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.182947ms\"\nI0622 22:19:10.559705 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:10.593741 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 
numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:19:10.598645 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.993983ms\"\nI0622 22:19:17.830161 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:17.876909 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36\nI0622 22:19:17.882112 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.009627ms\"\nI0622 22:19:17.957528 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:18.003129 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:19:18.008805 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.327977ms\"\nI0622 22:19:18.125472 10 service.go:322] \"Service updated ports\" service=\"kubectl-2629/agnhost-primary\" portCount=0\nI0622 22:19:19.009688 10 service.go:462] \"Removing service port\" portName=\"kubectl-2629/agnhost-primary\"\nI0622 22:19:19.009742 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:19.122877 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:19:19.129860 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"120.208688ms\"\nI0622 22:19:26.288352 10 service.go:322] \"Service updated ports\" service=\"webhook-7646/e2e-test-webhook\" portCount=1\nI0622 22:19:26.288430 10 service.go:437] \"Adding new service port\" portName=\"webhook-7646/e2e-test-webhook\" servicePort=\"100.65.209.249:8443/TCP\"\nI0622 22:19:26.288462 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:26.324580 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:19:26.329662 10 proxier.go:820] \"SyncProxyRules 
complete\" elapsed=\"41.236112ms\"\nI0622 22:19:26.329804 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:26.370099 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:19:26.375002 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.301836ms\"\nI0622 22:19:32.049718 10 service.go:322] \"Service updated ports\" service=\"webhook-7646/e2e-test-webhook\" portCount=0\nI0622 22:19:32.049761 10 service.go:462] \"Removing service port\" portName=\"webhook-7646/e2e-test-webhook\"\nI0622 22:19:32.049795 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:32.084997 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 22:19:32.090256 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.496369ms\"\nI0622 22:19:32.090479 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:19:32.126022 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:19:32.130591 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.29927ms\"\nI0622 22:20:11.236164 10 service.go:322] \"Service updated ports\" service=\"webhook-1103/e2e-test-webhook\" portCount=1\nI0622 22:20:11.236229 10 service.go:437] \"Adding new service port\" portName=\"webhook-1103/e2e-test-webhook\" servicePort=\"100.66.45.65:8443/TCP\"\nI0622 22:20:11.236260 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:11.276856 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 22:20:11.282305 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.082401ms\"\nI0622 22:20:11.282384 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:11.316764 10 proxier.go:1461] 
\"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 22:20:11.322503 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.156879ms\"\nI0622 22:20:13.217403 10 service.go:322] \"Service updated ports\" service=\"webhook-1103/e2e-test-webhook\" portCount=0\nI0622 22:20:13.217455 10 service.go:462] \"Removing service port\" portName=\"webhook-1103/e2e-test-webhook\"\nI0622 22:20:13.217487 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:13.254524 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 22:20:13.259343 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.88877ms\"\nI0622 22:20:13.259631 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:13.293507 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 22:20:13.297904 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.526336ms\"\nI0622 22:20:22.821736 10 service.go:322] \"Service updated ports\" service=\"proxy-4458/proxy-service-h2bww\" portCount=4\nI0622 22:20:22.821797 10 service.go:437] \"Adding new service port\" portName=\"proxy-4458/proxy-service-h2bww:portname1\" servicePort=\"100.71.176.193:80/TCP\"\nI0622 22:20:22.821842 10 service.go:437] \"Adding new service port\" portName=\"proxy-4458/proxy-service-h2bww:portname2\" servicePort=\"100.71.176.193:81/TCP\"\nI0622 22:20:22.821858 10 service.go:437] \"Adding new service port\" portName=\"proxy-4458/proxy-service-h2bww:tlsportname1\" servicePort=\"100.71.176.193:443/TCP\"\nI0622 22:20:22.821872 10 service.go:437] \"Adding new service port\" portName=\"proxy-4458/proxy-service-h2bww:tlsportname2\" servicePort=\"100.71.176.193:444/TCP\"\nI0622 22:20:22.821902 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 
22:20:22.856750 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 22:20:22.861817 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.027682ms\"\nI0622 22:20:22.861871 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:22.901905 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 22:20:22.908040 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.185254ms\"\nI0622 22:20:23.702758 10 service.go:322] \"Service updated ports\" service=\"webhook-8946/e2e-test-webhook\" portCount=1\nI0622 22:20:23.908895 10 service.go:437] \"Adding new service port\" portName=\"webhook-8946/e2e-test-webhook\" servicePort=\"100.68.246.134:8443/TCP\"\nI0622 22:20:23.908979 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:23.946928 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=39\nI0622 22:20:23.952660 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.815874ms\"\nI0622 22:20:25.283260 10 service.go:322] \"Service updated ports\" service=\"webhook-8946/e2e-test-webhook\" portCount=0\nI0622 22:20:25.283310 10 service.go:462] \"Removing service port\" portName=\"webhook-8946/e2e-test-webhook\"\nI0622 22:20:25.283340 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:25.317439 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=36\nI0622 22:20:25.322582 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.272429ms\"\nI0622 22:20:26.306480 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:26.353810 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 
numNATChains=15 numNATRules=34\nI0622 22:20:26.360152 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.793557ms\"\nI0622 22:20:26.498833 10 service.go:322] \"Service updated ports\" service=\"services-2691/up-down-1\" portCount=1\nI0622 22:20:27.081015 10 service.go:437] \"Adding new service port\" portName=\"services-2691/up-down-1\" servicePort=\"100.69.65.39:80/TCP\"\nI0622 22:20:27.081154 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:27.115937 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=54\nI0622 22:20:27.121462 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.470735ms\"\nI0622 22:20:28.122018 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:28.181720 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=59\nI0622 22:20:28.191441 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"69.49664ms\"\nI0622 22:20:29.207922 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:29.268069 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=47\nI0622 22:20:29.273768 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"66.075145ms\"\nI0622 22:20:30.490563 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:30.528061 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42\nI0622 22:20:30.533130 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.632687ms\"\nI0622 22:20:33.099847 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:33.145281 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42\nI0622 
22:20:33.158351 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"58.612897ms\"\nI0622 22:20:33.279975 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:33.326027 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=42\nI0622 22:20:33.333121 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.212537ms\"\nI0622 22:20:34.680672 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:34.725950 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=45\nI0622 22:20:34.733277 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.673288ms\"\nI0622 22:20:35.680222 10 service.go:322] \"Service updated ports\" service=\"services-2691/up-down-2\" portCount=1\nI0622 22:20:35.680279 10 service.go:437] \"Adding new service port\" portName=\"services-2691/up-down-2\" servicePort=\"100.64.247.246:80/TCP\"\nI0622 22:20:35.680313 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:35.743570 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=45\nI0622 22:20:35.750053 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"69.777381ms\"\nI0622 22:20:36.750833 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:36.823915 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=45\nI0622 22:20:36.831410 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"80.64179ms\"\nI0622 22:20:38.626449 10 service.go:322] \"Service updated ports\" service=\"proxy-4458/proxy-service-h2bww\" portCount=0\nI0622 22:20:38.626509 10 service.go:462] \"Removing service port\" portName=\"proxy-4458/proxy-service-h2bww:portname1\"\nI0622 22:20:38.626521 10 service.go:462] \"Removing 
service port\" portName=\"proxy-4458/proxy-service-h2bww:portname2\"\nI0622 22:20:38.626531 10 service.go:462] \"Removing service port\" portName=\"proxy-4458/proxy-service-h2bww:tlsportname1\"\nI0622 22:20:38.626540 10 service.go:462] \"Removing service port\" portName=\"proxy-4458/proxy-service-h2bww:tlsportname2\"\nI0622 22:20:38.626611 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:38.661155 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 22:20:38.667741 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.232615ms\"\nI0622 22:20:38.686758 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:38.725850 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 22:20:38.731015 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.278411ms\"\nI0622 22:20:39.732074 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:39.766438 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 22:20:39.771384 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.391706ms\"\nI0622 22:20:40.771898 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:40.821202 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53\nI0622 22:20:40.826145 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.379437ms\"\nI0622 22:20:44.895816 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:44.938627 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56\nI0622 22:20:44.943715 10 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"47.972836ms\"\nI0622 22:20:51.388341 10 service.go:322] \"Service updated ports\" service=\"webhook-2249/e2e-test-webhook\" portCount=1\nI0622 22:20:51.388509 10 service.go:437] \"Adding new service port\" portName=\"webhook-2249/e2e-test-webhook\" servicePort=\"100.71.212.228:8443/TCP\"\nI0622 22:20:51.388555 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:51.423429 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56\nI0622 22:20:51.428620 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.224216ms\"\nI0622 22:20:51.428857 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:51.467356 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61\nI0622 22:20:51.473338 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.681159ms\"\nI0622 22:20:51.723134 10 service.go:322] \"Service updated ports\" service=\"services-5697/nodeport-service\" portCount=1\nI0622 22:20:51.760950 10 service.go:322] \"Service updated ports\" service=\"services-5697/externalsvc\" portCount=1\nI0622 22:20:52.473770 10 service.go:437] \"Adding new service port\" portName=\"services-5697/nodeport-service\" servicePort=\"100.70.77.177:80/TCP\"\nI0622 22:20:52.473805 10 service.go:437] \"Adding new service port\" portName=\"services-5697/externalsvc\" servicePort=\"100.71.106.98:80/TCP\"\nI0622 22:20:52.473859 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:52.513133 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=61\nI0622 22:20:52.520062 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.325621ms\"\nI0622 22:20:52.760907 10 service.go:322] \"Service updated ports\" service=\"webhook-2249/e2e-test-webhook\" portCount=0\nI0622 22:20:53.521095 10 
service.go:462] \"Removing service port\" portName=\"webhook-2249/e2e-test-webhook\"\nI0622 22:20:53.521185 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:53.574540 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=58\nI0622 22:20:53.586794 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.71912ms\"\nI0622 22:20:54.587568 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:54.645231 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=61\nI0622 22:20:54.651548 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"64.056505ms\"\nI0622 22:20:58.490461 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:20:58.525180 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=64\nI0622 22:20:58.531207 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.822219ms\"\nI0622 22:21:00.629971 10 service.go:322] \"Service updated ports\" service=\"conntrack-8214/svc-udp\" portCount=1\nI0622 22:21:00.630032 10 service.go:437] \"Adding new service port\" portName=\"conntrack-8214/svc-udp:udp\" servicePort=\"100.70.207.139:80/UDP\"\nI0622 22:21:00.630073 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:00.667908 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=64\nI0622 22:21:00.676535 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.510383ms\"\nI0622 22:21:00.676607 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:00.711043 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=64\nI0622 22:21:00.716843 10 proxier.go:820] 
\"SyncProxyRules complete\" elapsed=\"40.265642ms\"\nI0622 22:21:00.983033 10 service.go:322] \"Service updated ports\" service=\"services-5697/nodeport-service\" portCount=0\nI0622 22:21:01.717361 10 service.go:462] \"Removing service port\" portName=\"services-5697/nodeport-service\"\nI0622 22:21:01.717433 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:01.753934 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=64\nI0622 22:21:01.759650 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.325768ms\"\nI0622 22:21:03.399118 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:03.442026 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=62\nI0622 22:21:03.449678 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.641041ms\"\nI0622 22:21:04.450537 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:04.508220 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=56\nI0622 22:21:04.517313 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"67.012116ms\"\nI0622 22:21:05.280398 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:05.315655 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53\nI0622 22:21:05.321647 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.326956ms\"\nI0622 22:21:05.729242 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:05.766411 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53\nI0622 22:21:05.772561 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.38566ms\"\nI0622 22:21:06.773794 
10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:06.811291 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53\nI0622 22:21:06.817076 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.372795ms\"\nI0622 22:21:07.711113 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:07.751225 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53\nI0622 22:21:07.756664 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.610335ms\"\nI0622 22:21:07.803952 10 service.go:322] \"Service updated ports\" service=\"services-2691/up-down-1\" portCount=0\nI0622 22:21:08.757558 10 service.go:462] \"Removing service port\" portName=\"services-2691/up-down-1\"\nI0622 22:21:08.757749 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-8214/svc-udp:udp\" clusterIP=\"100.70.207.139\"\nI0622 22:21:08.757831 10 proxier.go:847] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-8214/svc-udp:udp\" nodePort=31112\nI0622 22:21:08.757848 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:08.793324 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61\nI0622 22:21:08.806781 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.243885ms\"\nI0622 22:21:09.764294 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:09.800608 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=59\nI0622 22:21:09.805859 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.629408ms\"\nI0622 22:21:10.806086 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:10.868701 10 proxier.go:1461] \"Reloading service iptables data\" 
numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=55\nI0622 22:21:10.879810 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"73.88563ms\"\nI0622 22:21:11.762966 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:11.800058 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 22:21:11.805507 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.611242ms\"\nI0622 22:21:12.805761 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:12.853300 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 22:21:12.858785 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.125653ms\"\nI0622 22:21:14.981160 10 service.go:322] \"Service updated ports\" service=\"apply-4603/test-svc\" portCount=1\nI0622 22:21:14.981216 10 service.go:437] \"Adding new service port\" portName=\"apply-4603/test-svc\" servicePort=\"100.65.40.214:8080/UDP\"\nI0622 22:21:14.981526 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:15.018182 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53\nI0622 22:21:15.024291 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.082093ms\"\nI0622 22:21:15.326985 10 service.go:322] \"Service updated ports\" service=\"webhook-4223/e2e-test-webhook\" portCount=1\nI0622 22:21:15.327045 10 service.go:437] \"Adding new service port\" portName=\"webhook-4223/e2e-test-webhook\" servicePort=\"100.67.124.14:8443/TCP\"\nI0622 22:21:15.327101 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:15.373876 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=53\nI0622 22:21:15.386364 
10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.324013ms\"\nI0622 22:21:15.950766 10 service.go:322] \"Service updated ports\" service=\"services-5697/externalsvc\" portCount=0\nI0622 22:21:15.999416 10 service.go:462] \"Removing service port\" portName=\"services-5697/externalsvc\"\nI0622 22:21:15.999519 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:16.035154 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58\nI0622 22:21:16.040765 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.35926ms\"\nI0622 22:21:17.040975 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:17.080815 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58\nI0622 22:21:17.104776 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"63.849401ms\"\nI0622 22:21:19.734958 10 service.go:322] \"Service updated ports\" service=\"webhook-4223/e2e-test-webhook\" portCount=0\nI0622 22:21:19.734999 10 service.go:462] \"Removing service port\" portName=\"webhook-4223/e2e-test-webhook\"\nI0622 22:21:19.735051 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:19.798005 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=55\nI0622 22:21:19.805695 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.692799ms\"\nI0622 22:21:19.805781 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:19.875917 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 22:21:19.897726 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"91.992264ms\"\nI0622 22:21:20.210990 10 service.go:322] \"Service updated ports\" service=\"apply-4603/test-svc\" portCount=0\nI0622 
22:21:20.898115 10 service.go:462] \"Removing service port\" portName=\"apply-4603/test-svc\"\nI0622 22:21:20.898236 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:20.936479 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53\nI0622 22:21:20.945647 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.580605ms\"\nI0622 22:21:21.938766 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:21.977864 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56\nI0622 22:21:21.982952 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.246773ms\"\nI0622 22:21:22.962196 10 service.go:322] \"Service updated ports\" service=\"services-8737/service-proxy-toggled\" portCount=1\nI0622 22:21:22.962257 10 service.go:437] \"Adding new service port\" portName=\"services-8737/service-proxy-toggled\" servicePort=\"100.66.0.218:80/TCP\"\nI0622 22:21:22.962301 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:23.002497 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56\nI0622 22:21:23.009540 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.29218ms\"\nI0622 22:21:23.357946 10 service.go:322] \"Service updated ports\" service=\"services-3636/nodeport-collision-1\" portCount=1\nI0622 22:21:23.523750 10 service.go:322] \"Service updated ports\" service=\"services-3636/nodeport-collision-2\" portCount=1\nI0622 22:21:23.825003 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:23.876677 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=59\nI0622 22:21:23.895082 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.147267ms\"\nI0622 
22:21:24.215018 10 service.go:322] \"Service updated ports\" service=\"dns-4380/test-service-2\" portCount=1\nI0622 22:21:24.895782 10 service.go:437] \"Adding new service port\" portName=\"dns-4380/test-service-2:http\" servicePort=\"100.65.217.220:80/TCP\"\nI0622 22:21:24.895855 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:24.936406 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58\nI0622 22:21:24.943909 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.172385ms\"\nI0622 22:21:25.944152 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:25.989483 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61\nI0622 22:21:25.997841 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.783297ms\"\nI0622 22:21:27.596892 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:27.632743 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=64\nI0622 22:21:27.639489 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.775992ms\"\nI0622 22:21:33.899822 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:33.937148 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=69\nI0622 22:21:33.942767 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.00242ms\"\nI0622 22:21:34.049050 10 service.go:322] \"Service updated ports\" service=\"webhook-6550/e2e-test-webhook\" portCount=1\nI0622 22:21:34.049100 10 service.go:437] \"Adding new service port\" portName=\"webhook-6550/e2e-test-webhook\" servicePort=\"100.65.107.216:8443/TCP\"\nI0622 22:21:34.049141 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:34.135429 10 proxier.go:1461] 
\"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=69\nI0622 22:21:34.141716 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"92.61937ms\"\nI0622 22:21:35.141966 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:35.180454 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=30 numNATRules=74\nI0622 22:21:35.186091 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.220978ms\"\nI0622 22:21:36.308727 10 service.go:322] \"Service updated ports\" service=\"services-2691/up-down-3\" portCount=1\nI0622 22:21:36.308788 10 service.go:437] \"Adding new service port\" portName=\"services-2691/up-down-3\" servicePort=\"100.66.189.159:80/TCP\"\nI0622 22:21:36.308846 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:36.345100 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=74\nI0622 22:21:36.350824 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.042275ms\"\nI0622 22:21:37.351089 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:37.387721 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=74\nI0622 22:21:37.394820 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.878356ms\"\nI0622 22:21:37.890693 10 service.go:322] \"Service updated ports\" service=\"webhook-6550/e2e-test-webhook\" portCount=0\nI0622 22:21:37.939642 10 service.go:462] \"Removing service port\" portName=\"webhook-6550/e2e-test-webhook\"\nI0622 22:21:37.939930 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:37.980881 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=71\nI0622 
22:21:37.986625 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.99438ms\"\nI0622 22:21:38.659158 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service\" portCount=1\nI0622 22:21:38.710113 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service-np\" portCount=1\nI0622 22:21:38.926681 10 service.go:322] \"Service updated ports\" service=\"conntrack-8214/svc-udp\" portCount=0\nI0622 22:21:38.926740 10 service.go:437] \"Adding new service port\" portName=\"resourcequota-1890/test-service\" servicePort=\"100.64.177.183:80/TCP\"\nI0622 22:21:38.926756 10 service.go:437] \"Adding new service port\" portName=\"resourcequota-1890/test-service-np\" servicePort=\"100.67.152.30:80/TCP\"\nI0622 22:21:38.926767 10 service.go:462] \"Removing service port\" portName=\"conntrack-8214/svc-udp:udp\"\nI0622 22:21:38.926963 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:38.970955 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=30 numNATRules=69\nI0622 22:21:38.983102 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.366165ms\"\nI0622 22:21:39.983335 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:40.022016 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=6 numNATChains=27 numNATRules=66\nI0622 22:21:40.028614 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.369509ms\"\nI0622 22:21:40.859983 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service\" portCount=0\nI0622 22:21:40.921985 10 service.go:322] \"Service updated ports\" service=\"resourcequota-1890/test-service-np\" portCount=0\nI0622 22:21:40.922098 10 service.go:462] \"Removing service port\" portName=\"resourcequota-1890/test-service\"\nI0622 22:21:40.922119 10 service.go:462] \"Removing service port\" 
portName=\"resourcequota-1890/test-service-np\"\nI0622 22:21:40.922393 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:40.959377 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=69\nI0622 22:21:40.964724 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.63932ms\"\nI0622 22:21:43.193952 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:43.241827 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=3 numNATChains=29 numNATRules=72\nI0622 22:21:43.256765 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.890334ms\"\nI0622 22:21:51.519057 10 service.go:322] \"Service updated ports\" service=\"services-8737/service-proxy-toggled\" portCount=0\nI0622 22:21:51.519105 10 service.go:462] \"Removing service port\" portName=\"services-8737/service-proxy-toggled\"\nI0622 22:21:51.519155 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:51.568447 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=29 numNATRules=65\nI0622 22:21:51.579604 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.498075ms\"\nI0622 22:21:51.579707 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 22:21:51.643659 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61\nI0622 22:21:51.660283 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"80.641263ms\"\nI0622 22:21:56.212229 10 service.go:322] \"Service updated ports\" service=\"services-8737/service-proxy-toggled\" portCount=1\nI0622 22:21:56.212289 10 service.go:437] \"Adding new service port\" portName=\"services-8737/service-proxy-toggled\" servicePort=\"100.66.0.218:80/TCP\"\nI0622 22:21:56.212349 10 proxier.go:853] \"Syncing iptables 
rules"
I0622 22:21:56.278418 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61
I0622 22:21:56.289566 10 proxier.go:820] "SyncProxyRules complete" elapsed="77.281312ms"
I0622 22:21:56.289842 10 proxier.go:853] "Syncing iptables rules"
I0622 22:21:56.334460 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=3 numNATChains=29 numNATRules=72
I0622 22:21:56.341438 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.663623ms"
I0622 22:22:00.685366 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:00.715898 10 service.go:322] "Service updated ports" service="dns-4380/test-service-2" portCount=0
I0622 22:22:00.719690 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=69
I0622 22:22:00.725101 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.798814ms"
I0622 22:22:00.725136 10 service.go:462] "Removing service port" portName="dns-4380/test-service-2:http"
I0622 22:22:00.725173 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:00.758556 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=67
I0622 22:22:00.763761 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.62635ms"
I0622 22:22:01.764011 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:01.817932 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=67
I0622 22:22:01.825319 10 proxier.go:820] "SyncProxyRules complete" elapsed="61.390559ms"
I0622 22:22:03.947909 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:03.993372 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=27 numNATRules=58
I0622 22:22:03.998973 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.178098ms"
I0622 22:22:03.999075 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:04.033333 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=48
I0622 22:22:04.038647 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.642907ms"
I0622 22:22:05.039342 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:05.111228 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=45
I0622 22:22:05.122289 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.195816ms"
I0622 22:22:05.136026 10 service.go:322] "Service updated ports" service="services-2691/up-down-2" portCount=0
I0622 22:22:05.152711 10 service.go:322] "Service updated ports" service="services-2691/up-down-3" portCount=0
I0622 22:22:06.122494 10 service.go:462] "Removing service port" portName="services-2691/up-down-2"
I0622 22:22:06.122528 10 service.go:462] "Removing service port" portName="services-2691/up-down-3"
I0622 22:22:06.122570 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:06.158728 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45
I0622 22:22:06.163791 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.329184ms"
I0622 22:22:26.059593 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:26.095317 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43
I0622 22:22:26.100117 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.5962ms"
I0622 22:22:26.100375 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:26.133297 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=18 numNATRules=37
I0622 22:22:26.138112 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.962022ms"
I0622 22:22:26.180942 10 service.go:322] "Service updated ports" service="services-8737/service-proxy-toggled" portCount=0
I0622 22:22:27.138435 10 service.go:462] "Removing service port" portName="services-8737/service-proxy-toggled"
I0622 22:22:27.138539 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:27.195288 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 22:22:27.205194 10 proxier.go:820] "SyncProxyRules complete" elapsed="66.800555ms"
I0622 22:22:31.103605 10 service.go:322] "Service updated ports" service="conntrack-1762/svc-udp" portCount=1
I0622 22:22:31.103805 10 service.go:437] "Adding new service port" portName="conntrack-1762/svc-udp:udp" servicePort="100.66.89.30:80/UDP"
I0622 22:22:31.103841 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:31.148771 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:22:31.155859 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.08153ms"
I0622 22:22:31.155920 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:31.209144 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 22:22:31.216383 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.48504ms"
I0622 22:22:40.701974 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-1762/svc-udp:udp" clusterIP="100.66.89.30"
I0622 22:22:40.702003 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:40.742467 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 22:22:40.759272 10 proxier.go:820] "SyncProxyRules complete" elapsed="57.429149ms"
I0622 22:22:46.200636 10 service.go:322] "Service updated ports" service="services-6764/nodeport-update-service" portCount=1
I0622 22:22:46.200699 10 service.go:437] "Adding new service port" portName="services-6764/nodeport-update-service" servicePort="100.66.51.116:80/TCP"
I0622 22:22:46.200732 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:46.236775 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39
I0622 22:22:46.243213 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.522103ms"
I0622 22:22:46.243281 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:46.276589 10 service.go:322] "Service updated ports" service="services-6764/nodeport-update-service" portCount=1
I0622 22:22:46.281355 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39
I0622 22:22:46.286808 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.560355ms"
I0622 22:22:47.287034 10 service.go:437] "Adding new service port" portName="services-6764/nodeport-update-service:tcp-port" servicePort="100.66.51.116:80/TCP"
I0622 22:22:47.287085 10 service.go:462] "Removing service port" portName="services-6764/nodeport-update-service"
I0622 22:22:47.287124 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:47.330705 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39
I0622 22:22:47.336401 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.423869ms"
I0622 22:22:47.554792 10 service.go:322] "Service updated ports" service="webhook-6812/e2e-test-webhook" portCount=1
I0622 22:22:48.337437 10 service.go:437] "Adding new service port" portName="webhook-6812/e2e-test-webhook" servicePort="100.64.215.29:8443/TCP"
I0622 22:22:48.337537 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:48.372297 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 22:22:48.378938 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.557017ms"
I0622 22:22:48.933866 10 service.go:322] "Service updated ports" service="webhook-6812/e2e-test-webhook" portCount=0
I0622 22:22:49.379122 10 service.go:462] "Removing service port" portName="webhook-6812/e2e-test-webhook"
I0622 22:22:49.379215 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:49.425832 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=49
I0622 22:22:49.432904 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.80506ms"
I0622 22:22:53.012172 10 service.go:322] "Service updated ports" service="endpointslicemirroring-2582/example-custom-endpoints" portCount=1
I0622 22:22:53.012242 10 service.go:437] "Adding new service port" portName="endpointslicemirroring-2582/example-custom-endpoints:example" servicePort="100.67.187.138:80/TCP"
I0622 22:22:53.012311 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:53.053387 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 22:22:53.059331 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.097907ms"
I0622 22:22:53.083762 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:53.138363 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 22:22:53.144106 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.381671ms"
I0622 22:22:54.144342 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:54.182357 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 22:22:54.187919 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.655135ms"
I0622 22:22:55.186540 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:55.228956 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50
I0622 22:22:55.235527 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.054783ms"
I0622 22:22:58.484632 10 service.go:322] "Service updated ports" service="endpointslicemirroring-2582/example-custom-endpoints" portCount=0
I0622 22:22:58.484685 10 service.go:462] "Removing service port" portName="endpointslicemirroring-2582/example-custom-endpoints:example"
I0622 22:22:58.484732 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:58.527514 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 22:22:58.535854 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.020597ms"
I0622 22:22:58.600433 10 service.go:322] "Service updated ports" service="services-1344/clusterip-service" portCount=1
I0622 22:22:58.600483 10 service.go:437] "Adding new service port" portName="services-1344/clusterip-service" servicePort="100.67.161.58:80/TCP"
I0622 22:22:58.600527 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:58.639214 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50
I0622 22:22:58.643751 10 service.go:322] "Service updated ports" service="services-1344/externalsvc" portCount=1
I0622 22:22:58.646143 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.667691ms"
I0622 22:22:59.618178 10 service.go:437] "Adding new service port" portName="services-1344/externalsvc" servicePort="100.64.245.54:80/TCP"
I0622 22:22:59.618287 10 proxier.go:853] "Syncing iptables rules"
I0622 22:22:59.663327 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 22:22:59.669423 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.264221ms"
I0622 22:23:00.670699 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:00.706380 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=56
I0622 22:23:00.727521 10 proxier.go:820] "SyncProxyRules complete" elapsed="57.014487ms"
I0622 22:23:04.803368 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:04.840429 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 22:23:04.847483 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.178867ms"
I0622 22:23:07.862088 10 service.go:322] "Service updated ports" service="services-1344/clusterip-service" portCount=0
I0622 22:23:07.862140 10 service.go:462] "Removing service port" portName="services-1344/clusterip-service"
I0622 22:23:07.862183 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:07.896007 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=58
I0622 22:23:07.902263 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.122808ms"
I0622 22:23:10.814328 10 service.go:322] "Service updated ports" service="services-6764/nodeport-update-service" portCount=2
I0622 22:23:10.814389 10 service.go:439] "Updating existing service port" portName="services-6764/nodeport-update-service:tcp-port" servicePort="100.66.51.116:80/TCP"
I0622 22:23:10.814405 10 service.go:437] "Adding new service port" portName="services-6764/nodeport-update-service:udp-port" servicePort="100.66.51.116:80/UDP"
I0622 22:23:10.814447 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:10.886572 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=58
I0622 22:23:10.913349 10 proxier.go:820] "SyncProxyRules complete" elapsed="98.965268ms"
I0622 22:23:10.913540 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="services-6764/nodeport-update-service:udp-port" clusterIP="100.66.51.116"
I0622 22:23:10.913714 10 proxier.go:847] "Stale service" protocol="udp" servicePortName="services-6764/nodeport-update-service:udp-port" nodePort=31929
I0622 22:23:10.913732 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:10.988448 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=69
I0622 22:23:11.013232 10 proxier.go:820] "SyncProxyRules complete" elapsed="99.843292ms"
I0622 22:23:15.338111 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:15.374175 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=66
I0622 22:23:15.386919 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.876145ms"
I0622 22:23:15.387220 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:15.420079 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=64
I0622 22:23:15.425630 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.674538ms"
I0622 22:23:15.587522 10 service.go:322] "Service updated ports" service="conntrack-1762/svc-udp" portCount=0
I0622 22:23:16.426648 10 service.go:462] "Removing service port" portName="conntrack-1762/svc-udp:udp"
I0622 22:23:16.426740 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:16.462706 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=64
I0622 22:23:16.473891 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.269016ms"
I0622 22:23:17.474804 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:17.509459 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=59
I0622 22:23:17.515318 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.617867ms"
I0622 22:23:18.515776 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:18.552119 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 22:23:18.557196 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.523808ms"
I0622 22:23:19.486133 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:19.522271 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 22:23:19.529545 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.463797ms"
I0622 22:23:19.722162 10 service.go:322] "Service updated ports" service="services-1344/externalsvc" portCount=0
I0622 22:23:20.529721 10 service.go:462] "Removing service port" portName="services-1344/externalsvc"
I0622 22:23:20.529823 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:20.591231 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56
I0622 22:23:20.598093 10 proxier.go:820] "SyncProxyRules complete" elapsed="68.404741ms"
I0622 22:23:21.773617 10 service.go:322] "Service updated ports" service="webhook-8046/e2e-test-webhook" portCount=1
I0622 22:23:21.773672 10 service.go:437] "Adding new service port" portName="webhook-8046/e2e-test-webhook" servicePort="100.68.65.52:8443/TCP"
I0622 22:23:21.773712 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:21.812571 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 22:23:21.819106 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.441564ms"
I0622 22:23:22.820277 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:22.857420 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61
I0622 22:23:22.864447 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.259132ms"
I0622 22:23:23.375657 10 service.go:322] "Service updated ports" service="webhook-2643/e2e-test-webhook" portCount=1
I0622 22:23:23.375711 10 service.go:437] "Adding new service port" portName="webhook-2643/e2e-test-webhook" servicePort="100.70.239.243:8443/TCP"
I0622 22:23:23.375753 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:23.449023 10 service.go:322] "Service updated ports" service="webhook-8046/e2e-test-webhook" portCount=0
I0622 22:23:23.453563 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61
I0622 22:23:23.461064 10 proxier.go:820] "SyncProxyRules complete" elapsed="85.361062ms"
I0622 22:23:24.461424 10 service.go:462] "Removing service port" portName="webhook-8046/e2e-test-webhook"
I0622 22:23:24.461554 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:24.497891 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=63
I0622 22:23:24.505461 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.089828ms"
I0622 22:23:37.314423 10 service.go:322] "Service updated ports" service="webhook-2643/e2e-test-webhook" portCount=0
I0622 22:23:37.314497 10 service.go:462] "Removing service port" portName="webhook-2643/e2e-test-webhook"
I0622 22:23:37.314558 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:37.350488 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=58
I0622 22:23:37.355833 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.360878ms"
I0622 22:23:37.356010 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:37.392229 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=56
I0622 22:23:37.397830 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.961147ms"
I0622 22:23:49.292159 10 service.go:322] "Service updated ports" service="services-6764/nodeport-update-service" portCount=0
I0622 22:23:49.292230 10 service.go:462] "Removing service port" portName="services-6764/nodeport-update-service:tcp-port"
I0622 22:23:49.292288 10 service.go:462] "Removing service port" portName="services-6764/nodeport-update-service:udp-port"
I0622 22:23:49.292350 10 proxier.go:853] "Syncing iptables rules"
I0622 22:23:49.366218 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=42
I0622 22:23:49.383375 10 proxier.go:820] "SyncProxyRules complete" elapsed="91.146123ms"
I0622 22:23:49.383627 10 proxier.go:853] "Syncing iptables rules"
I0622