Error lines from build-log.txt
... skipping 183 lines ...
Updating project ssh metadata...
.....................................................Updated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew].
.done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I0622 16:05:33.490198 5900 up.go:44] Cleaning up any leaked resources from previous cluster
I0622 16:05:33.490329 5900 dumplogs.go:45] /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kops toolbox dump --name e2e-e2e-kops-gce-stable.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh854688831/key --ssh-user prow
W0622 16:05:33.656301 5900 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0622 16:05:33.656347 5900 down.go:48] /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kops delete cluster --name e2e-e2e-kops-gce-stable.k8s.local --yes
I0622 16:05:33.677422 5949 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 16:05:33.677607 5949 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-gce-stable.k8s.local" not found
I0622 16:05:33.791806 5900 gcs.go:51] gsutil ls -b -p gce-gci-upg-1-3-lat-ctl-skew gs://gce-gci-upg-1-3-lat-ctl-skew-state-e3
I0622 16:05:35.464559 5900 gcs.go:70] gsutil mb -p gce-gci-upg-1-3-lat-ctl-skew gs://gce-gci-upg-1-3-lat-ctl-skew-state-e3
Creating gs://gce-gci-upg-1-3-lat-ctl-skew-state-e3/...
I0622 16:05:37.393026 5900 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/22 16:05:37 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0622 16:05:37.404034 5900 http.go:37] curl https://ip.jsb.workers.dev
I0622 16:05:37.499525 5900 up.go:159] /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kops create cluster --name e2e-e2e-kops-gce-stable.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.25.0-alpha.1 --ssh-public-key /tmp/kops-ssh854688831/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --gce-service-account=default --admin-access 34.68.101.23/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west4-a --master-size e2-standard-2 --project gce-gci-upg-1-3-lat-ctl-skew
I0622 16:05:37.519506 6241 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 16:05:37.519598 6241 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0622 16:05:37.546320 6241 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh854688831/key.pub
I0622 16:05:37.889826 6241 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 375 lines ...
I0622 16:05:45.968064 6263 address.go:139] GCE creating address: "api-e2e-e2e-kops-gce-stable-k8s-local"
I0622 16:05:45.969641 6263 keypair.go:225] Issuing new certificate: "etcd-manager-ca-main"
I0622 16:05:46.024279 6263 keypair.go:225] Issuing new certificate: "etcd-clients-ca"
W0622 16:05:46.067492 6263 vfs_castore.go:379] CA private key was not found
I0622 16:05:46.154387 6263 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0622 16:06:01.775295 6263 executor.go:111] Tasks: 42 done / 68 total; 20 can run
W0622 16:06:18.507217 6263 executor.go:139] error running task "ForwardingRule/api-e2e-e2e-kops-gce-stable-k8s-local" (9m43s remaining to succeed): error creating ForwardingRule "api-e2e-e2e-kops-gce-stable-k8s-local": googleapi: Error 400: The resource 'projects/gce-gci-upg-1-3-lat-ctl-skew/regions/us-west4/targetPools/api-e2e-e2e-kops-gce-stable-k8s-local' is not ready, resourceNotReady
I0622 16:06:18.507362 6263 executor.go:111] Tasks: 61 done / 68 total; 5 can run
I0622 16:06:28.997558 6263 executor.go:111] Tasks: 66 done / 68 total; 2 can run
I0622 16:06:49.402925 6263 executor.go:111] Tasks: 68 done / 68 total; 0 can run
I0622 16:06:49.550849 6263 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-e2e-kops-gce-stable.k8s.local
... skipping 8 lines ...
I0622 16:07:00.250820 5900 up.go:243] /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kops validate cluster --name e2e-e2e-kops-gce-stable.k8s.local --count 10 --wait 15m0s
I0622 16:07:00.308465 6283 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 16:07:00.308570 6283 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-gce-stable.k8s.local
W0622 16:07:30.638791 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: i/o timeout
W0622 16:07:40.682744 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:07:50.729667 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:08:00.775757 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:08:10.821706 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:08:20.865673 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:08:30.911650 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:08:40.954848 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:08:50.999134 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:09:01.047803 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:09:11.093281 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
W0622 16:09:31.137410 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": net/http: TLS handshake timeout
W0622 16:09:51.183240 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": net/http: TLS handshake timeout
W0622 16:10:02.434158 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused - error from a previous attempt: read tcp 10.60.3.75:43186->34.125.165.160:443: read: connection reset by peer
W0622 16:10:12.479730 6283 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.125.165.160/api/v1/nodes": dial tcp 34.125.165.160:443: connect: connection refused
I0622 16:10:22.993669 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 5 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Validation Failed
W0622 16:10:23.949658 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:10:34.375701 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Validation Failed
W0622 16:10:35.228595 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:10:45.663127 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Validation Failed
W0622 16:10:46.560998 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:10:56.930809 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 8 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Pod kube-system/etcd-manager-events-master-us-west4-a-6m23 system-cluster-critical pod "etcd-manager-events-master-us-west4-a-6m23" is pending
Pod kube-system/kube-controller-manager-master-us-west4-a-6m23 system-cluster-critical pod "kube-controller-manager-master-us-west4-a-6m23" is not ready (kube-controller-manager)
Validation Failed
W0622 16:10:57.887824 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:11:08.388624 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Validation Failed
W0622 16:11:09.306283 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:11:19.714423 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-7gg3" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Validation Failed
W0622 16:11:20.689426 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:11:31.100422 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 8 lines ...
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-m34f" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-r4pg" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6 machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/nodes-us-west4-a-z5t6" has not yet joined cluster
Pod kube-system/coredns-autoscaler-5d4dbc7b59-46mjh system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-46mjh" is pending
Pod kube-system/coredns-dd657c749-x6mh6 system-cluster-critical pod "coredns-dd657c749-x6mh6" is pending
Validation Failed
W0622 16:11:32.150671 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:11:42.595090 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 17 lines ...
Pod kube-system/kube-proxy-nodes-us-west4-a-7gg3 system-node-critical pod "kube-proxy-nodes-us-west4-a-7gg3" is pending
Pod kube-system/kube-proxy-nodes-us-west4-a-r4pg system-node-critical pod "kube-proxy-nodes-us-west4-a-r4pg" is pending
Pod kube-system/kube-scheduler-master-us-west4-a-6m23 system-cluster-critical pod "kube-scheduler-master-us-west4-a-6m23" is pending
Pod kube-system/metadata-proxy-v0.12-ct9vq system-node-critical pod "metadata-proxy-v0.12-ct9vq" is pending
Pod kube-system/metadata-proxy-v0.12-txbkj system-node-critical pod "metadata-proxy-v0.12-txbkj" is pending
Validation Failed
W0622 16:11:43.377538 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:11:53.821514 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 14 lines ...
Node nodes-us-west4-a-z5t6 node "nodes-us-west4-a-z5t6" of role "node" is not ready
Pod kube-system/coredns-autoscaler-5d4dbc7b59-46mjh system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-46mjh" is pending
Pod kube-system/coredns-dd657c749-x6mh6 system-cluster-critical pod "coredns-dd657c749-x6mh6" is pending
Pod kube-system/metadata-proxy-v0.12-gqsqq system-node-critical pod "metadata-proxy-v0.12-gqsqq" is pending
Pod kube-system/metadata-proxy-v0.12-x6c4h system-node-critical pod "metadata-proxy-v0.12-x6c4h" is pending
Validation Failed
W0622 16:11:54.646629 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:12:05.081484 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 87 lines ...
nodes-us-west4-a-z5t6 node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/kube-proxy-nodes-us-west4-a-z5t6 system-node-critical pod "kube-proxy-nodes-us-west4-a-z5t6" is pending
Validation Failed
W0622 16:13:02.404617 6283 validate_cluster.go:232] (will retry): cluster not yet healthy
I0622 16:13:12.793995 6283 gce_cloud.go:295] Scanning zones: [us-west4-a us-west4-b us-west4-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west4-a Master e2-standard-2 1 1 us-west4
nodes-us-west4-a Node n1-standard-2 4 4 us-west4
... skipping 183 lines ...
===================================
Random Seed: 1655914515 - Will randomize all specs
Will run 7042 specs
Running in parallel across 25 nodes
Jun 22 16:15:32.280: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Jun 22 16:15:32.280: INFO: > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Jun 22 16:15:32.280: INFO: >
Jun 22 16:15:32.280: INFO: Cluster image sources lookup failed: exit status 1
Jun 22 16:15:32.280: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:15:32.281: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 22 16:15:32.493: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 22 16:15:32.642: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 22 16:15:32.642: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 1052 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:15:33.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conformance-tests-778" for this suite.
•
------------------------------
{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:33.726: INFO: Only supported for providers [aws] (not gce)
... skipping 77 lines ...
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:1577
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":4,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:33.775: INFO: Only supported for providers [azure] (not gce)
... skipping 125 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:15:33.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-9497" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":1,"skipped":1,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:15:34.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5752" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:34.358: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/framework.go:187
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 52 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:15:33.252: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546" in namespace "downward-api-5818" to be "Succeeded or Failed"
Jun 22 16:15:33.301: INFO: Pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546": Phase="Pending", Reason="", readiness=false. Elapsed: 48.911053ms
Jun 22 16:15:35.346: INFO: Pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093899086s
Jun 22 16:15:37.346: INFO: Pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09363098s
Jun 22 16:15:39.347: INFO: Pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546": Phase="Running", Reason="", readiness=false. Elapsed: 6.094745987s
Jun 22 16:15:41.352: INFO: Pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099750128s
STEP: Saw pod success
Jun 22 16:15:41.352: INFO: Pod "downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546" satisfied condition "Succeeded or Failed"
Jun 22 16:15:41.397: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546 container client-container: <nil>
STEP: delete the pod
Jun 22 16:15:41.502: INFO: Waiting for pod downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546 to disappear
Jun 22 16:15:41.546: INFO: Pod downwardapi-volume-26b07c16-4f45-47aa-833f-a4487a6d3546 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.775 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:41.685: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 38 lines ...
test/e2e/common/node/framework.go:23
when scheduling a busybox command that always fails in a pod
test/e2e/common/node/kubelet.go:81
should have an terminated reason [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:41.813: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/framework/framework.go:187
... skipping 87 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-2280/configmap-test-634005a0-3c79-44fe-9bc4-02c8ac589d9f
STEP: Creating a pod to test consume configMaps
Jun 22 16:15:33.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358" in namespace "configmap-2280" to be "Succeeded or Failed"
Jun 22 16:15:33.579: INFO: Pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358": Phase="Pending", Reason="", readiness=false. Elapsed: 59.205815ms
Jun 22 16:15:35.626: INFO: Pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106105324s
Jun 22 16:15:37.626: INFO: Pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106032714s
Jun 22 16:15:39.629: INFO: Pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109217906s
Jun 22 16:15:41.626: INFO: Pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105879601s
STEP: Saw pod success
Jun 22 16:15:41.626: INFO: Pod "pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358" satisfied condition "Succeeded or Failed"
Jun 22 16:15:41.672: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358 container env-test: <nil>
STEP: delete the pod
Jun 22 16:15:41.776: INFO: Waiting for pod pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358 to disappear
Jun 22 16:15:41.823: INFO: Pod pod-configmaps-1639b0f4-7bd2-4ae9-a3e6-f8e6612bf358 no longer exists
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.972 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
should be consumable via environment variable [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:42.005: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 22 16:15:33.329: INFO: Waiting up to 5m0s for pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21" in namespace "emptydir-5819" to be "Succeeded or Failed"
Jun 22 16:15:33.395: INFO: Pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21": Phase="Pending", Reason="", readiness=false. Elapsed: 66.639466ms
Jun 22 16:15:35.443: INFO: Pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114159069s
Jun 22 16:15:37.442: INFO: Pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113604193s
Jun 22 16:15:39.445: INFO: Pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116597914s
Jun 22 16:15:41.443: INFO: Pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114543282s
STEP: Saw pod success
Jun 22 16:15:41.443: INFO: Pod "pod-ebab8009-c402-4ceb-834b-8f7143ffed21" satisfied condition "Succeeded or Failed"
Jun 22 16:15:41.493: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-ebab8009-c402-4ceb-834b-8f7143ffed21 container test-container: <nil>
STEP: delete the pod
Jun 22 16:15:42.020: INFO: Waiting for pod pod-ebab8009-c402-4ceb-834b-8f7143ffed21 to disappear
Jun 22 16:15:42.068: INFO: Pod pod-ebab8009-c402-4ceb-834b-8f7143ffed21 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:9.262 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:42.256: INFO: Only supported for providers [aws] (not gce)
... skipping 82 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:15:43.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-99" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":2,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:43.521: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:112
STEP: Creating configMap with name projected-configmap-test-volume-map-ffe0a72c-f574-4e0b-9776-2840223a6b94
STEP: Creating a pod to test consume configMaps
Jun 22 16:15:33.392: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e" in namespace "projected-1657" to be "Succeeded or Failed"
Jun 22 16:15:33.464: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e": Phase="Pending", Reason="", readiness=false. Elapsed: 71.518012ms
Jun 22 16:15:35.510: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117922194s
Jun 22 16:15:37.509: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116921872s
Jun 22 16:15:39.516: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e": Phase="Running", Reason="", readiness=true. Elapsed: 6.123943305s
Jun 22 16:15:41.509: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e": Phase="Running", Reason="", readiness=false. Elapsed: 8.116637361s
Jun 22 16:15:43.509: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116129532s
STEP: Saw pod success
Jun 22 16:15:43.509: INFO: Pod "pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e" satisfied condition "Succeeded or Failed"
Jun 22 16:15:43.552: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:15:43.659: INFO: Waiting for pod pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e to disappear
Jun 22 16:15:43.707: INFO: Pod pod-projected-configmaps-b525de75-2e2d-40c5-a3de-0cf6db6b857e no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.870 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:112
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:43.884: INFO: Only supported for providers [azure] (not gce)
... skipping 54 lines ...
Jun 22 16:15:35.405: INFO: The phase of Pod server-envvars-7601c0a5-bcc2-49c6-8280-ad0a29029df8 is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:15:37.405: INFO: Pod "server-envvars-7601c0a5-bcc2-49c6-8280-ad0a29029df8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104112418s
Jun 22 16:15:37.405: INFO: The phase of Pod server-envvars-7601c0a5-bcc2-49c6-8280-ad0a29029df8 is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:15:39.409: INFO: Pod "server-envvars-7601c0a5-bcc2-49c6-8280-ad0a29029df8": Phase="Running", Reason="", readiness=true. Elapsed: 6.108140238s
Jun 22 16:15:39.409: INFO: The phase of Pod server-envvars-7601c0a5-bcc2-49c6-8280-ad0a29029df8 is Running (Ready = true)
Jun 22 16:15:39.409: INFO: Pod "server-envvars-7601c0a5-bcc2-49c6-8280-ad0a29029df8" satisfied condition "running and ready"
Jun 22 16:15:39.550: INFO: Waiting up to 5m0s for pod "client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d" in namespace "pods-5379" to be "Succeeded or Failed"
Jun 22 16:15:39.595: INFO: Pod "client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.096609ms
Jun 22 16:15:41.641: INFO: Pod "client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090429686s
Jun 22 16:15:43.641: INFO: Pod "client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090457799s
STEP: Saw pod success
Jun 22 16:15:43.641: INFO: Pod "client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d" satisfied condition "Succeeded or Failed"
Jun 22 16:15:43.686: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d container env3cont: <nil>
STEP: delete the pod
Jun 22 16:15:43.788: INFO: Waiting for pod client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d to disappear
Jun 22 16:15:43.836: INFO: Pod client-envvars-b2b6fc48-8e82-4c95-a5e4-3f5186398d0d no longer exists
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:11.041 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should contain environment variables for services [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-55f38709-9607-4df5-a14c-044cfcd72cdd
STEP: Creating a pod to test consume configMaps
Jun 22 16:15:33.579: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195" in namespace "projected-4991" to be "Succeeded or Failed"
Jun 22 16:15:33.634: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195": Phase="Pending", Reason="", readiness=false. Elapsed: 54.909973ms
Jun 22 16:15:35.683: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103422173s
Jun 22 16:15:37.681: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101700795s
Jun 22 16:15:39.683: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103397565s
Jun 22 16:15:41.684: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104381004s
Jun 22 16:15:43.682: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102608622s
STEP: Saw pod success
Jun 22 16:15:43.682: INFO: Pod "pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195" satisfied condition "Succeeded or Failed"
Jun 22 16:15:43.730: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:15:43.844: INFO: Waiting for pod pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195 to disappear
Jun 22 16:15:43.892: INFO: Pod pod-projected-configmaps-d10d72d7-2e31-45bc-afbc-266c85a4f195 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.979 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:44.054: INFO: Only supported for providers [azure] (not gce)
... skipping 72 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:15:33.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9" in namespace "downward-api-152" to be "Succeeded or Failed"
Jun 22 16:15:33.363: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 59.138895ms
Jun 22 16:15:35.410: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106140537s
Jun 22 16:15:37.410: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106074428s
Jun 22 16:15:39.413: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9": Phase="Running", Reason="", readiness=false. Elapsed: 6.109793191s
Jun 22 16:15:41.413: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9": Phase="Running", Reason="", readiness=false. Elapsed: 8.10955776s
Jun 22 16:15:43.413: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109121441s
STEP: Saw pod success
Jun 22 16:15:43.413: INFO: Pod "downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9" satisfied condition "Succeeded or Failed"
Jun 22 16:15:43.460: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9 container client-container: <nil>
STEP: delete the pod
Jun 22 16:15:43.968: INFO: Waiting for pod downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9 to disappear
Jun 22 16:15:44.016: INFO: Pod downwardapi-volume-c8fd0326-863c-482d-b7c5-74482f1a5fc9 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:11.249 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Containers
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:15:33.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test override all
Jun 22 16:15:34.288: INFO: Waiting up to 5m0s for pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc" in namespace "containers-1789" to be "Succeeded or Failed"
Jun 22 16:15:34.338: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.23503ms
Jun 22 16:15:36.385: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097171748s
Jun 22 16:15:38.383: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094894981s
Jun 22 16:15:40.381: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093143535s
Jun 22 16:15:42.382: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093515691s
Jun 22 16:15:44.394: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106120834s
STEP: Saw pod success
Jun 22 16:15:44.394: INFO: Pod "client-containers-71fb5100-d672-4438-8378-59e53854aabc" satisfied condition "Succeeded or Failed"
Jun 22 16:15:44.444: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod client-containers-71fb5100-d672-4438-8378-59e53854aabc container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:15:44.536: INFO: Waiting for pod client-containers-71fb5100-d672-4438-8378-59e53854aabc to disappear
Jun 22 16:15:44.578: INFO: Pod client-containers-71fb5100-d672-4438-8378-59e53854aabc no longer exists
[AfterEach] [sig-node] Containers
test/e2e/framework/framework.go:187
... skipping 15 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:369
Jun 22 16:15:33.396: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5" in namespace "security-context-test-9915" to be "Succeeded or Failed"
Jun 22 16:15:33.480: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 84.794509ms
Jun 22 16:15:35.533: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137672108s
Jun 22 16:15:37.531: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135032681s
Jun 22 16:15:39.529: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133220291s
Jun 22 16:15:41.527: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131406493s
Jun 22 16:15:43.527: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131740963s
Jun 22 16:15:43.527: INFO: Pod "alpine-nnp-true-a90a64af-b1d7-4950-9456-8341835b41e5" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 16:15:44.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9915" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
when creating containers with AllowPrivilegeEscalation
test/e2e/common/node/security_context.go:298
should allow privilege escalation when true [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:369
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:44.689: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-b7a05854-009b-4e4f-a756-67cac525bd90
STEP: Creating a pod to test consume secrets
Jun 22 16:15:33.497: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a" in namespace "projected-1041" to be "Succeeded or Failed"
Jun 22 16:15:33.559: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 62.607839ms
Jun 22 16:15:35.605: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108520157s
Jun 22 16:15:37.604: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106938589s
Jun 22 16:15:39.604: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107546245s
Jun 22 16:15:41.606: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108795309s
Jun 22 16:15:43.607: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110053069s
STEP: Saw pod success
Jun 22 16:15:43.607: INFO: Pod "pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a" satisfied condition "Succeeded or Failed"
Jun 22 16:15:43.652: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:15:44.787: INFO: Waiting for pod pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a to disappear
Jun 22 16:15:44.834: INFO: Pod pod-projected-secrets-abf6663d-21d2-4386-bdc5-d10567f67a6a no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:11.978 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:45.000: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 85 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:15:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7823" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:45.665: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 78 lines ...
STEP: Destroying namespace "apply-221" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
test/e2e/apimachinery/apply.go:59
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":2,"skipped":22,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:15.203 seconds]
[sig-node] PrivilegedPod [NodeConformance]
test/e2e/common/node/framework.go:23
should enable privileged commands [LinuxOnly]
test/e2e/common/node/privileged.go:52
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:48.308: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 106 lines ...
• [SLOW TEST:8.923 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
test/e2e/apimachinery/resource_quota.go:1446
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-node] Secrets
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:15:50.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name secret-emptykey-test-0656d28e-7191-4ff9-be1e-7ab0e40e5b76
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
Jun 22 16:15:50.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6767" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 13 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:15:51.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5770" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":4,"skipped":4,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
Jun 22 16:15:44.541: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-3942 create -f -'
Jun 22 16:15:44.909: INFO: stderr: ""
Jun 22 16:15:44.909: INFO: stdout: "pod/pause created\n"
Jun 22 16:15:44.909: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 22 16:15:44.910: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3942" to be "running and ready"
Jun 22 16:15:44.959: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 49.487255ms
Jun 22 16:15:44.959: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'nodes-us-west4-a-m34f' to be 'Running' but was 'Pending'
Jun 22 16:15:47.015: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10554755s
Jun 22 16:15:47.015: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'nodes-us-west4-a-m34f' to be 'Running' but was 'Pending'
Jun 22 16:15:49.009: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099009023s
Jun 22 16:15:49.009: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'nodes-us-west4-a-m34f' to be 'Running' but was 'Pending'
Jun 22 16:15:51.012: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.102693414s
Jun 22 16:15:51.012: INFO: Pod "pause" satisfied condition "running and ready"
Jun 22 16:15:51.012: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
test/e2e/framework/framework.go:647
STEP: adding the label testing-label with value testing-label-value to a pod
... skipping 35 lines ...
test/e2e/kubectl/framework.go:23
Kubectl label
test/e2e/kubectl/kubectl.go:1481
should update the label on a resource [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:52.762: INFO: Only supported for providers [openstack] (not gce)
... skipping 124 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":24,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:57.585: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 109 lines ...
Jun 22 16:15:42.592: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086454694s
Jun 22 16:15:44.593: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 10.086834284s
Jun 22 16:15:44.593: INFO: Pod "test-pod" satisfied condition "running"
STEP: Creating statefulset with conflicting port in namespace statefulset-9179
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9179
Jun 22 16:15:44.703: INFO: Observed stateful pod in namespace: statefulset-9179, name: ss-0, uid: fe88d852-5ffe-4346-b8b8-4ba2833ee4e9, status phase: Pending. Waiting for statefulset controller to delete.
Jun 22 16:15:44.746: INFO: Observed stateful pod in namespace: statefulset-9179, name: ss-0, uid: fe88d852-5ffe-4346-b8b8-4ba2833ee4e9, status phase: Failed. Waiting for statefulset controller to delete.
Jun 22 16:15:44.746: INFO: Observed stateful pod in namespace: statefulset-9179, name: ss-0, uid: fe88d852-5ffe-4346-b8b8-4ba2833ee4e9, status phase: Failed. Waiting for statefulset controller to delete.
Jun 22 16:15:44.747: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9179
STEP: Removing pod with conflicting port in namespace statefulset-9179
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9179 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:122
Jun 22 16:15:48.942: INFO: Deleting all statefulset in ns statefulset-9179
... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:61
STEP: Creating configMap with name configmap-test-volume-9d5805d2-f7a0-43c5-ae86-1af095d71b5f
STEP: Creating a pod to test consume configMaps
Jun 22 16:15:53.233: INFO: Waiting up to 5m0s for pod "pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b" in namespace "configmap-3810" to be "Succeeded or Failed"
Jun 22 16:15:53.281: INFO: Pod "pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b": Phase="Pending", Reason="", readiness=false. Elapsed: 48.534619ms
Jun 22 16:15:55.331: INFO: Pod "pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098287853s
Jun 22 16:15:57.330: INFO: Pod "pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097186152s
Jun 22 16:15:59.333: INFO: Pod "pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100106271s
STEP: Saw pod success
Jun 22 16:15:59.333: INFO: Pod "pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b" satisfied condition "Succeeded or Failed"
Jun 22 16:15:59.383: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:15:59.493: INFO: Waiting for pod pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b to disappear
Jun 22 16:15:59.544: INFO: Pod pod-configmaps-49798900-1cba-4dc9-8e3f-57e8dd8bc38b no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.870 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:61
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":11,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:15:59.676: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 83 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:15:45.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:15:45.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d" in namespace "projected-817" to be "Succeeded or Failed"
Jun 22 16:15:45.442: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Pending", Reason="", readiness=false. Elapsed: 61.405846ms
Jun 22 16:15:47.505: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124518915s
Jun 22 16:15:49.488: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107370928s
Jun 22 16:15:51.486: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105263264s
Jun 22 16:15:53.486: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105390743s
Jun 22 16:15:55.487: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Running", Reason="", readiness=true. Elapsed: 10.106291823s
Jun 22 16:15:57.487: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Running", Reason="", readiness=true. Elapsed: 12.106700431s
Jun 22 16:15:59.488: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.107597343s
STEP: Saw pod success
Jun 22 16:15:59.488: INFO: Pod "downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d" satisfied condition "Succeeded or Failed"
Jun 22 16:15:59.533: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d container client-container: <nil>
STEP: delete the pod
Jun 22 16:15:59.627: INFO: Waiting for pod downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d to disappear
Jun 22 16:15:59.670: INFO: Pod downwardapi-volume-ab9ae511-dac7-43de-8690-b4ea9110711d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.738 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 23 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
Jun 22 16:15:33.272: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:15:33.468: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2570" in namespace "provisioning-2570" to be "Succeeded or Failed"
Jun 22 16:15:33.547: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 78.882026ms
Jun 22 16:15:35.595: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12624716s
Jun 22 16:15:37.594: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125944261s
Jun 22 16:15:39.595: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127060237s
Jun 22 16:15:41.595: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126910118s
Jun 22 16:15:43.607: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 10.138717646s
Jun 22 16:15:45.596: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.127944784s
STEP: Saw pod success
Jun 22 16:15:45.596: INFO: Pod "hostpath-symlink-prep-provisioning-2570" satisfied condition "Succeeded or Failed"
Jun 22 16:15:45.596: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2570" in namespace "provisioning-2570"
Jun 22 16:15:45.651: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2570" to be fully deleted
Jun 22 16:15:45.728: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-sk2g
STEP: Creating a pod to test subpath
Jun 22 16:15:45.796: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sk2g" in namespace "provisioning-2570" to be "Succeeded or Failed"
Jun 22 16:15:45.847: INFO: Pod "pod-subpath-test-inlinevolume-sk2g": Phase="Pending", Reason="", readiness=false. Elapsed: 50.273682ms
Jun 22 16:15:47.896: INFO: Pod "pod-subpath-test-inlinevolume-sk2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099309501s
Jun 22 16:15:49.897: INFO: Pod "pod-subpath-test-inlinevolume-sk2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100878332s
Jun 22 16:15:51.895: INFO: Pod "pod-subpath-test-inlinevolume-sk2g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099209013s
Jun 22 16:15:53.895: INFO: Pod "pod-subpath-test-inlinevolume-sk2g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098389136s
Jun 22 16:15:55.894: INFO: Pod "pod-subpath-test-inlinevolume-sk2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097502042s
STEP: Saw pod success
Jun 22 16:15:55.894: INFO: Pod "pod-subpath-test-inlinevolume-sk2g" satisfied condition "Succeeded or Failed"
Jun 22 16:15:55.947: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-subpath-test-inlinevolume-sk2g container test-container-subpath-inlinevolume-sk2g: <nil>
STEP: delete the pod
Jun 22 16:15:56.070: INFO: Waiting for pod pod-subpath-test-inlinevolume-sk2g to disappear
Jun 22 16:15:56.118: INFO: Pod pod-subpath-test-inlinevolume-sk2g no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-sk2g
Jun 22 16:15:56.118: INFO: Deleting pod "pod-subpath-test-inlinevolume-sk2g" in namespace "provisioning-2570"
STEP: Deleting pod
Jun 22 16:15:56.164: INFO: Deleting pod "pod-subpath-test-inlinevolume-sk2g" in namespace "provisioning-2570"
Jun 22 16:15:56.258: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2570" in namespace "provisioning-2570" to be "Succeeded or Failed"
Jun 22 16:15:56.304: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 45.378879ms
Jun 22 16:15:58.351: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092479048s
Jun 22 16:16:00.353: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094631409s
Jun 22 16:16:02.351: INFO: Pod "hostpath-symlink-prep-provisioning-2570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093276011s
STEP: Saw pod success
Jun 22 16:16:02.352: INFO: Pod "hostpath-symlink-prep-provisioning-2570" satisfied condition "Succeeded or Failed"
Jun 22 16:16:02.352: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2570" in namespace "provisioning-2570"
Jun 22 16:16:02.404: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2570" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 22 16:16:02.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2570" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:02.583: INFO: Only supported for providers [azure] (not gce)
... skipping 84 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":55,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:03.599: INFO: Only supported for providers [azure] (not gce)
... skipping 27 lines ...
[It] should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
Jun 22 16:15:33.102: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:15:33.152: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fxpq
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:15:33.202: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fxpq" in namespace "provisioning-9278" to be "Succeeded or Failed"
Jun 22 16:15:33.244: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Pending", Reason="", readiness=false. Elapsed: 42.119283ms
Jun 22 16:15:35.292: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08997117s
Jun 22 16:15:37.290: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087911632s
Jun 22 16:15:39.290: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08873723s
Jun 22 16:15:41.290: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Running", Reason="", readiness=true. Elapsed: 8.087924375s
Jun 22 16:15:43.290: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Running", Reason="", readiness=true. Elapsed: 10.088482545s
... skipping 5 lines ...
Jun 22 16:15:55.289: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Running", Reason="", readiness=true. Elapsed: 22.087097112s
Jun 22 16:15:57.291: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Running", Reason="", readiness=true. Elapsed: 24.089179476s
Jun 22 16:15:59.287: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Running", Reason="", readiness=true. Elapsed: 26.085714786s
Jun 22 16:16:01.287: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Running", Reason="", readiness=true. Elapsed: 28.085559778s
Jun 22 16:16:03.290: INFO: Pod "pod-subpath-test-inlinevolume-fxpq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.088459723s
STEP: Saw pod success
Jun 22 16:16:03.290: INFO: Pod "pod-subpath-test-inlinevolume-fxpq" satisfied condition "Succeeded or Failed"
Jun 22 16:16:03.333: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-inlinevolume-fxpq container test-container-subpath-inlinevolume-fxpq: <nil>
STEP: delete the pod
Jun 22 16:16:03.428: INFO: Waiting for pod pod-subpath-test-inlinevolume-fxpq to disappear
Jun 22 16:16:03.471: INFO: Pod pod-subpath-test-inlinevolume-fxpq no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fxpq
Jun 22 16:16:03.471: INFO: Deleting pod "pod-subpath-test-inlinevolume-fxpq" in namespace "provisioning-9278"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:03.698: INFO: Only supported for providers [aws] (not gce)
... skipping 56 lines ...
• [SLOW TEST:19.005 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should manage the lifecycle of a job [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should manage the lifecycle of a job [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:04.787: INFO: Only supported for providers [vsphere] (not gce)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
Jun 22 16:16:00.160: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1" in namespace "security-context-test-858" to be "Succeeded or Failed"
Jun 22 16:16:00.209: INFO: Pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.963207ms
Jun 22 16:16:02.259: INFO: Pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098404407s
Jun 22 16:16:04.259: INFO: Pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098333328s
Jun 22 16:16:06.261: INFO: Pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100404351s
Jun 22 16:16:06.261: INFO: Pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1" satisfied condition "Succeeded or Failed"
Jun 22 16:16:06.314: INFO: Got logs for pod "busybox-privileged-false-0c1585ca-db6a-4d25-9d7a-4b45b9d0b3f1": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 16:16:06.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-858" for this suite.
... skipping 3 lines ...
test/e2e/common/node/framework.go:23
When creating a pod with privileged
test/e2e/common/node/security_context.go:234
should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Conntrack
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 62 lines ...
• [SLOW TEST:24.869 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a NodePort service
test/e2e/network/conntrack.go:132
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":2,"skipped":37,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:07.221: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 137 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:07.463: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 113 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 106 lines ...
test/e2e/storage/testsuites/subpath.go:196
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:1577
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:16:00.681: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Jun 22 16:16:05.707: INFO: PersistentVolumeClaim pvc-7nwt9 found but phase is Pending instead of Bound.
Jun 22 16:16:07.751: INFO: PersistentVolumeClaim pvc-7nwt9 found and phase=Bound (4.133198527s)
Jun 22 16:16:07.751: INFO: Waiting up to 3m0s for PersistentVolume local-868gv to have phase Bound
Jun 22 16:16:07.794: INFO: PersistentVolume local-868gv found and phase=Bound (43.4129ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sfn9
STEP: Creating a pod to test subpath
Jun 22 16:16:07.926: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sfn9" in namespace "provisioning-222" to be "Succeeded or Failed"
Jun 22 16:16:07.971: INFO: Pod "pod-subpath-test-preprovisionedpv-sfn9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.203002ms
Jun 22 16:16:10.048: INFO: Pod "pod-subpath-test-preprovisionedpv-sfn9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122020448s
Jun 22 16:16:12.018: INFO: Pod "pod-subpath-test-preprovisionedpv-sfn9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092378571s
Jun 22 16:16:14.015: INFO: Pod "pod-subpath-test-preprovisionedpv-sfn9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088919963s
STEP: Saw pod success
Jun 22 16:16:14.015: INFO: Pod "pod-subpath-test-preprovisionedpv-sfn9" satisfied condition "Succeeded or Failed"
Jun 22 16:16:14.059: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-preprovisionedpv-sfn9 container test-container-subpath-preprovisionedpv-sfn9: <nil>
STEP: delete the pod
Jun 22 16:16:14.154: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sfn9 to disappear
Jun 22 16:16:14.197: INFO: Pod pod-subpath-test-preprovisionedpv-sfn9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sfn9
Jun 22 16:16:14.197: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sfn9" in namespace "provisioning-222"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":17,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 92 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:15.105: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-f1a21e2d-f4e1-40ee-a9a4-0b6c3def68fa
STEP: Creating a pod to test consume secrets
Jun 22 16:16:06.895: INFO: Waiting up to 5m0s for pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d" in namespace "secrets-8852" to be "Succeeded or Failed"
Jun 22 16:16:06.945: INFO: Pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d": Phase="Pending", Reason="", readiness=false. Elapsed: 49.08141ms
Jun 22 16:16:08.995: INFO: Pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099302608s
Jun 22 16:16:10.995: INFO: Pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099962379s
Jun 22 16:16:12.995: INFO: Pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099409948s
Jun 22 16:16:15.000: INFO: Pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104402297s
STEP: Saw pod success
Jun 22 16:16:15.000: INFO: Pod "pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d" satisfied condition "Succeeded or Failed"
Jun 22 16:16:15.050: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d container secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:16:15.157: INFO: Waiting for pod pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d to disappear
Jun 22 16:16:15.207: INFO: Pod pod-secrets-08eea625-473d-4cc6-a4b6-c4f0b1edd52d no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.868 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
Jun 22 16:16:07.594: INFO: Waiting up to 5m0s for pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295" in namespace "security-context-test-3210" to be "Succeeded or Failed"
Jun 22 16:16:07.638: INFO: Pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295": Phase="Pending", Reason="", readiness=false. Elapsed: 43.49966ms
Jun 22 16:16:09.682: INFO: Pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087682465s
Jun 22 16:16:11.683: INFO: Pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088928738s
Jun 22 16:16:13.682: INFO: Pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087920634s
Jun 22 16:16:15.682: INFO: Pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087197758s
Jun 22 16:16:15.682: INFO: Pod "busybox-user-65534-238e2535-2b8b-480a-a4c0-48f7b14ee295" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 16:16:15.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3210" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a container with runAsUser
test/e2e/common/node/security_context.go:52
should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:15.790: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 93 lines ...
Jun 22 16:16:05.528: INFO: PersistentVolumeClaim pvc-dt6vk found but phase is Pending instead of Bound.
Jun 22 16:16:07.574: INFO: PersistentVolumeClaim pvc-dt6vk found and phase=Bound (8.234072425s)
Jun 22 16:16:07.575: INFO: Waiting up to 3m0s for PersistentVolume local-9zpq2 to have phase Bound
Jun 22 16:16:07.621: INFO: PersistentVolume local-9zpq2 found and phase=Bound (46.134931ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9flm
STEP: Creating a pod to test subpath
Jun 22 16:16:07.778: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9flm" in namespace "provisioning-8232" to be "Succeeded or Failed"
Jun 22 16:16:07.825: INFO: Pod "pod-subpath-test-preprovisionedpv-9flm": Phase="Pending", Reason="", readiness=false. Elapsed: 47.091921ms
Jun 22 16:16:09.875: INFO: Pod "pod-subpath-test-preprovisionedpv-9flm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097104904s
Jun 22 16:16:11.873: INFO: Pod "pod-subpath-test-preprovisionedpv-9flm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09483362s
Jun 22 16:16:13.875: INFO: Pod "pod-subpath-test-preprovisionedpv-9flm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097243073s
Jun 22 16:16:15.875: INFO: Pod "pod-subpath-test-preprovisionedpv-9flm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096462491s
STEP: Saw pod success
Jun 22 16:16:15.875: INFO: Pod "pod-subpath-test-preprovisionedpv-9flm" satisfied condition "Succeeded or Failed"
Jun 22 16:16:15.924: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-subpath-test-preprovisionedpv-9flm container test-container-subpath-preprovisionedpv-9flm: <nil>
STEP: delete the pod
Jun 22 16:16:16.031: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9flm to disappear
Jun 22 16:16:16.078: INFO: Pod pod-subpath-test-preprovisionedpv-9flm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9flm
Jun 22 16:16:16.078: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9flm" in namespace "provisioning-8232"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":28,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:16.824: INFO: Only supported for providers [azure] (not gce)
... skipping 35 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:16.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-5859" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":6,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:16.919: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 165 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in container's args
Jun 22 16:16:11.289: INFO: Waiting up to 5m0s for pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c" in namespace "var-expansion-1489" to be "Succeeded or Failed"
Jun 22 16:16:11.335: INFO: Pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.960557ms
Jun 22 16:16:13.381: INFO: Pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09213534s
Jun 22 16:16:15.382: INFO: Pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c": Phase="Running", Reason="", readiness=true. Elapsed: 4.093114547s
Jun 22 16:16:17.381: INFO: Pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c": Phase="Running", Reason="", readiness=false. Elapsed: 6.092719292s
Jun 22 16:16:19.381: INFO: Pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092111438s
STEP: Saw pod success
Jun 22 16:16:19.381: INFO: Pod "var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c" satisfied condition "Succeeded or Failed"
Jun 22 16:16:19.427: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c container dapi-container: <nil>
STEP: delete the pod
Jun 22 16:16:19.528: INFO: Waiting for pod var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c to disappear
Jun 22 16:16:19.575: INFO: Pod var-expansion-f8567ff3-a7a2-4649-8777-77da5edc652c no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.761 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should allow substituting values in a container's args [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:19.735: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 133 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:20.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2289" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":5,"skipped":13,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:21.082: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 166 lines ...
Jun 22 16:15:42.631: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 create -f -'
Jun 22 16:15:43.381: INFO: stderr: ""
Jun 22 16:15:43.381: INFO: stdout: "pod/httpd created\n"
Jun 22 16:15:43.381: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 22 16:15:43.381: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-1715" to be "running and ready"
Jun 22 16:15:43.427: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.124413ms
Jun 22 16:15:43.427: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:45.486: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104958111s
Jun 22 16:15:45.486: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:47.497: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116153929s
Jun 22 16:15:47.497: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:49.481: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099535337s
Jun 22 16:15:49.481: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:51.475: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093695362s
Jun 22 16:15:51.475: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:53.475: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093909404s
Jun 22 16:15:53.475: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:55.474: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.092825853s
Jun 22 16:15:55.474: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:57.474: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.093204316s
Jun 22 16:15:57.474: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:15:59.476: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.094413765s
Jun 22 16:15:59.476: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-7gg3' to be 'Running' but was 'Pending'
Jun 22 16:16:01.474: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.092749541s
Jun 22 16:16:01.474: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-7gg3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC }]
Jun 22 16:16:03.475: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.093700828s
Jun 22 16:16:03.475: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-7gg3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC }]
Jun 22 16:16:05.474: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.09328007s
Jun 22 16:16:05.475: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-7gg3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC }]
Jun 22 16:16:07.474: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.093026654s
Jun 22 16:16:07.474: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-7gg3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:15:43 +0000 UTC }]
Jun 22 16:16:09.476: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 26.09443964s
Jun 22 16:16:09.476: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 22 16:16:09.476: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should handle in-cluster config
test/e2e/kubectl/kubectl.go:682
STEP: adding rbac permissions
... skipping 67 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jun 22 16:16:17.256: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jun 22 16:16:17.958: INFO: rc: 1
STEP: trying to use kubectl with invalid token
Jun 22 16:16:17.959: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jun 22 16:16:18.592: INFO: rc: 1
Jun 22 16:16:18.593: INFO: got err error running /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0622 16:16:18.536496 184 merged_client_builder.go:163] Using in-cluster namespace
I0622 16:16:18.536715 184 merged_client_builder.go:121] Using in-cluster configuration
I0622 16:16:18.543203 184 merged_client_builder.go:121] Using in-cluster configuration
I0622 16:16:18.543769 184 round_trippers.go:463] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-1715/pods?limit=500
I0622 16:16:18.543787 184 round_trippers.go:469] Request Headers:
... skipping 7 lines ...
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}]
error: You must be logged in to the server (Unauthorized)
stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 1
error:
exit status 1
STEP: trying to use kubectl with invalid server
Jun 22 16:16:18.593: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jun 22 16:16:19.329: INFO: rc: 1
Jun 22 16:16:19.329: INFO: got err error running /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0622 16:16:19.176784 194 merged_client_builder.go:163] Using in-cluster namespace
I0622 16:16:19.257068 194 round_trippers.go:553] GET http://invalid/api?timeout=32s in 79 milliseconds
I0622 16:16:19.257238 194 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0622 16:16:19.279026 194 round_trippers.go:553] GET http://invalid/api?timeout=32s in 21 milliseconds
I0622 16:16:19.279159 194 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0622 16:16:19.279418 194 shortcut.go:100] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0622 16:16:19.282376 194 round_trippers.go:553] GET http://invalid/api?timeout=32s in 2 milliseconds
I0622 16:16:19.282932 194 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0622 16:16:19.294377 194 round_trippers.go:553] GET http://invalid/api?timeout=32s in 11 milliseconds
I0622 16:16:19.294682 194 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0622 16:16:19.297654 194 round_trippers.go:553] GET http://invalid/api?timeout=32s in 2 milliseconds
I0622 16:16:19.297707 194 cached_discovery.go:119] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0622 16:16:19.298126 194 helpers.go:240] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 1
error:
exit status 1
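The invalid-token and invalid-server steps above both assert that a misconfigured `kubectl` invocation fails with a nonzero exit code. A minimal, generic sketch of that check-for-expected-failure pattern using `os/exec` (the test framework's own exec helpers are not reproduced here):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runExpectFailure runs a command and returns its nonzero exit code,
// erroring if the command unexpectedly succeeds or cannot be started.
// This is an illustrative helper, not the e2e framework's API.
func runExpectFailure(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, fmt.Errorf("expected %s to fail, but it succeeded", name)
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode(), nil // command ran and returned nonzero
	}
	return -1, err // command could not be started at all
}

func main() {
	code, err := runExpectFailure("/bin/sh", "-c", "exit 1")
	fmt.Println(code, err) // → 1 <nil>
}
```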
STEP: trying to use kubectl with invalid namespace
Jun 22 16:16:19.329: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1715 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jun 22 16:16:19.994: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jun 22 16:16:19.994: INFO: stdout: "I0622 16:16:19.922915 204 merged_client_builder.go:121] Using in-cluster configuration\nI0622 16:16:19.929870 204 merged_client_builder.go:121] Using in-cluster configuration\nI0622 16:16:19.949673 204 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 19 milliseconds\nNo resources found in invalid namespace.\n"
Jun 22 16:16:19.994: INFO: stdout: I0622 16:16:19.922915 204 merged_client_builder.go:121] Using in-cluster configuration
... skipping 68 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:111
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 22 16:16:15.737: INFO: Waiting up to 5m0s for pod "security-context-681e070d-2149-4597-9e2e-abea059d9be3" in namespace "security-context-9952" to be "Succeeded or Failed"
Jun 22 16:16:15.786: INFO: Pod "security-context-681e070d-2149-4597-9e2e-abea059d9be3": Phase="Pending", Reason="", readiness=false. Elapsed: 48.441223ms
Jun 22 16:16:17.835: INFO: Pod "security-context-681e070d-2149-4597-9e2e-abea059d9be3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098157448s
Jun 22 16:16:19.837: INFO: Pod "security-context-681e070d-2149-4597-9e2e-abea059d9be3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099606775s
Jun 22 16:16:21.837: INFO: Pod "security-context-681e070d-2149-4597-9e2e-abea059d9be3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099872541s
STEP: Saw pod success
Jun 22 16:16:21.837: INFO: Pod "security-context-681e070d-2149-4597-9e2e-abea059d9be3" satisfied condition "Succeeded or Failed"
Jun 22 16:16:21.885: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod security-context-681e070d-2149-4597-9e2e-abea059d9be3 container test-container: <nil>
STEP: delete the pod
Jun 22 16:16:22.002: INFO: Waiting for pod security-context-681e070d-2149-4597-9e2e-abea059d9be3 to disappear
Jun 22 16:16:22.050: INFO: Pod security-context-681e070d-2149-4597-9e2e-abea059d9be3 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.830 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support container.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:111
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":6,"skipped":27,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:22.203: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 113 lines ...
• [SLOW TEST:37.645 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:17.130 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:24.636: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 106 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:15:59.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 88 lines ...
test/e2e/common/network/framework.go:23
Granular Checks: Pods
test/e2e/common/network/networking.go:32
should function for intra-pod communication: http [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:26.417: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 101 lines ...
• [SLOW TEST:9.144 seconds]
[sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers]
test/e2e/common/node/framework.go:23
will start an ephemeral container in an existing pod
test/e2e/common/node/ephemeral_containers.go:44
------------------------------
{"msg":"PASSED [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod","total":-1,"completed":2,"skipped":36,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 19 lines ...
test/e2e/common/node/runtime.go:43
when running a container with a new image
test/e2e/common/node/runtime.go:259
should be able to pull image [NodeConformance]
test/e2e/common/node/runtime.go:375
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":7,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:29.222: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 25 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-secret-99fz
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:16:04.037: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-99fz" in namespace "subpath-8748" to be "Succeeded or Failed"
Jun 22 16:16:04.080: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Pending", Reason="", readiness=false. Elapsed: 42.640689ms
Jun 22 16:16:06.124: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086280132s
Jun 22 16:16:08.124: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086111748s
Jun 22 16:16:10.127: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 6.089601512s
Jun 22 16:16:12.127: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 8.08985919s
Jun 22 16:16:14.124: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 10.086052106s
... skipping 3 lines ...
Jun 22 16:16:22.131: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 18.093710809s
Jun 22 16:16:24.131: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 20.093974026s
Jun 22 16:16:26.142: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 22.104634732s
Jun 22 16:16:28.125: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Running", Reason="", readiness=true. Elapsed: 24.08709614s
Jun 22 16:16:30.125: INFO: Pod "pod-subpath-test-secret-99fz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.087493375s
STEP: Saw pod success
Jun 22 16:16:30.125: INFO: Pod "pod-subpath-test-secret-99fz" satisfied condition "Succeeded or Failed"
Jun 22 16:16:30.178: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-secret-99fz container test-container-subpath-secret-99fz: <nil>
STEP: delete the pod
Jun 22 16:16:30.280: INFO: Waiting for pod pod-subpath-test-secret-99fz to disappear
Jun 22 16:16:30.326: INFO: Pod pod-subpath-test-secret-99fz no longer exists
STEP: Deleting pod pod-subpath-test-secret-99fz
Jun 22 16:16:30.326: INFO: Deleting pod "pod-subpath-test-secret-99fz" in namespace "subpath-8748"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with secret pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":-1,"completed":4,"skipped":69,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:30.487: INFO: Only supported for providers [vsphere] (not gce)
... skipping 148 lines ...
test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
test/e2e/storage/csi_mock_volume.go:1334
CSIStorageCapacity unused
test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":1,"skipped":6,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-46a83443-1140-43cf-9a6b-23650e9295c9
STEP: Creating a pod to test consume secrets
Jun 22 16:16:25.199: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b" in namespace "projected-8903" to be "Succeeded or Failed"
Jun 22 16:16:25.253: INFO: Pod "pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.322842ms
Jun 22 16:16:27.303: INFO: Pod "pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103400568s
Jun 22 16:16:29.300: INFO: Pod "pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100485991s
Jun 22 16:16:31.303: INFO: Pod "pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103778608s
STEP: Saw pod success
Jun 22 16:16:31.303: INFO: Pod "pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b" satisfied condition "Succeeded or Failed"
Jun 22 16:16:31.355: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:16:31.474: INFO: Waiting for pod pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b to disappear
Jun 22 16:16:31.521: INFO: Pod pod-projected-secrets-95c16f46-8047-4784-9d2a-5997278f7d4b no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.860 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:31.650: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 47 lines ...
Jun 22 16:16:19.621: INFO: PersistentVolumeClaim pvc-zsqzq found but phase is Pending instead of Bound.
Jun 22 16:16:21.669: INFO: PersistentVolumeClaim pvc-zsqzq found and phase=Bound (4.140480735s)
Jun 22 16:16:21.669: INFO: Waiting up to 3m0s for PersistentVolume local-4nlw6 to have phase Bound
Jun 22 16:16:21.714: INFO: PersistentVolume local-4nlw6 found and phase=Bound (45.051691ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q9q9
STEP: Creating a pod to test subpath
Jun 22 16:16:21.853: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q9q9" in namespace "provisioning-3203" to be "Succeeded or Failed"
Jun 22 16:16:21.899: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9": Phase="Pending", Reason="", readiness=false. Elapsed: 46.469466ms
Jun 22 16:16:23.947: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094398877s
Jun 22 16:16:25.947: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094539978s
Jun 22 16:16:27.950: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097131455s
Jun 22 16:16:29.947: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094156372s
Jun 22 16:16:31.948: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095160635s
STEP: Saw pod success
Jun 22 16:16:31.948: INFO: Pod "pod-subpath-test-preprovisionedpv-q9q9" satisfied condition "Succeeded or Failed"
Jun 22 16:16:31.995: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-subpath-test-preprovisionedpv-q9q9 container test-container-volume-preprovisionedpv-q9q9: <nil>
STEP: delete the pod
Jun 22 16:16:32.112: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q9q9 to disappear
Jun 22 16:16:32.157: INFO: Pod pod-subpath-test-preprovisionedpv-q9q9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q9q9
Jun 22 16:16:32.157: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q9q9" in namespace "provisioning-3203"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":7,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:33.287: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 126 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:34.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6450" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:34.151: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run with an explicit non-root user ID [LinuxOnly]
test/e2e/common/node/security_context.go:131
Jun 22 16:16:29.678: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-153" to be "Succeeded or Failed"
Jun 22 16:16:29.735: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 57.293686ms
Jun 22 16:16:31.786: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1077798s
Jun 22 16:16:33.785: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107297905s
Jun 22 16:16:35.784: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105940558s
Jun 22 16:16:37.786: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.1080021s
Jun 22 16:16:37.786: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 16:16:37.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-153" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a container with runAsNonRoot
test/e2e/common/node/security_context.go:106
should run with an explicit non-root user ID [LinuxOnly]
test/e2e/common/node/security_context.go:131
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":47,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:37.978: INFO: Only supported for providers [vsphere] (not gce)
... skipping 77 lines ...
Jun 22 16:16:13.981: INFO: Unable to read jessie_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:14.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:14.078: INFO: Unable to read jessie_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:14.126: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:14.174: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:14.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:14.418: INFO: Lookups using dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5264 wheezy_tcp@dns-test-service.dns-5264 wheezy_udp@dns-test-service.dns-5264.svc wheezy_tcp@dns-test-service.dns-5264.svc wheezy_udp@_http._tcp.dns-test-service.dns-5264.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5264.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5264 jessie_tcp@dns-test-service.dns-5264 jessie_udp@dns-test-service.dns-5264.svc jessie_tcp@dns-test-service.dns-5264.svc jessie_udp@_http._tcp.dns-test-service.dns-5264.svc jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc]
Jun 22 16:16:19.473: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:19.524: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:19.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:19.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:19.671: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
... skipping 5 lines ...
Jun 22 16:16:20.172: INFO: Unable to read jessie_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:20.219: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:20.267: INFO: Unable to read jessie_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:20.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:20.366: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:20.414: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:20.603: INFO: Lookups using dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5264 wheezy_tcp@dns-test-service.dns-5264 wheezy_udp@dns-test-service.dns-5264.svc wheezy_tcp@dns-test-service.dns-5264.svc wheezy_udp@_http._tcp.dns-test-service.dns-5264.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5264.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5264 jessie_tcp@dns-test-service.dns-5264 jessie_udp@dns-test-service.dns-5264.svc jessie_tcp@dns-test-service.dns-5264.svc jessie_udp@_http._tcp.dns-test-service.dns-5264.svc jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc]
Jun 22 16:16:24.468: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:24.516: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:24.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:24.610: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:24.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
... skipping 5 lines ...
Jun 22 16:16:25.141: INFO: Unable to read jessie_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:25.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:25.236: INFO: Unable to read jessie_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:25.285: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:25.332: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:25.380: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:25.575: INFO: Lookups using dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5264 wheezy_tcp@dns-test-service.dns-5264 wheezy_udp@dns-test-service.dns-5264.svc wheezy_tcp@dns-test-service.dns-5264.svc wheezy_udp@_http._tcp.dns-test-service.dns-5264.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5264.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5264 jessie_tcp@dns-test-service.dns-5264 jessie_udp@dns-test-service.dns-5264.svc jessie_tcp@dns-test-service.dns-5264.svc jessie_udp@_http._tcp.dns-test-service.dns-5264.svc jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc]
Jun 22 16:16:29.469: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:29.518: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:29.568: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:29.627: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:29.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
... skipping 5 lines ...
Jun 22 16:16:30.190: INFO: Unable to read jessie_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:30.238: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:30.292: INFO: Unable to read jessie_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:30.340: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:30.387: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:30.438: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:30.633: INFO: Lookups using dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5264 wheezy_tcp@dns-test-service.dns-5264 wheezy_udp@dns-test-service.dns-5264.svc wheezy_tcp@dns-test-service.dns-5264.svc wheezy_udp@_http._tcp.dns-test-service.dns-5264.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5264.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5264 jessie_tcp@dns-test-service.dns-5264 jessie_udp@dns-test-service.dns-5264.svc jessie_tcp@dns-test-service.dns-5264.svc jessie_udp@_http._tcp.dns-test-service.dns-5264.svc jessie_tcp@_http._tcp.dns-test-service.dns-5264.svc]
Jun 22 16:16:34.466: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.514: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.625: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.720: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.772: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:34.821: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5264.svc from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:35.060: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:35.108: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:35.155: INFO: Unable to read jessie_udp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:35.206: INFO: Unable to read jessie_tcp@dns-test-service.dns-5264 from pod dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623: the server could not find the requested resource (get pods dns-test-154a7778-3984-4ff3-9f20-155299f87623)
Jun 22 16:16:35.637: INFO: Lookups using dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5264 wheezy_tcp@dns-test-service.dns-5264 wheezy_udp@dns-test-service.dns-5264.svc wheezy_tcp@dns-test-service.dns-5264.svc wheezy_udp@_http._tcp.dns-test-service.dns-5264.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5264.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5264 jessie_tcp@dns-test-service.dns-5264]
Jun 22 16:16:40.615: INFO: DNS probes using dns-5264/dns-test-154a7778-3984-4ff3-9f20-155299f87623 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:38.284 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:14.868 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:41.359: INFO: Only supported for providers [azure] (not gce)
... skipping 88 lines ...
• [SLOW TEST:70.403 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should remove from active list jobs that have been deleted
test/e2e/apps/cronjob.go:241
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:43.986: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 102 lines ...
• [SLOW TEST:46.767 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should not create pods when created in suspend state
test/e2e/apps/job.go:103
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":4,"skipped":41,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:44.440: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 75 lines ...
test/e2e/kubectl/framework.go:23
Kubectl validation
test/e2e/kubectl/kubectl.go:1033
should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
test/e2e/kubectl/kubectl.go:1078
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":2,"skipped":7,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:44.842: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:45.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1472" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:45.338: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
test/e2e/storage/testsuites/subpath.go:382
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:15:44.689: INFO: >>> kubeConfig: /root/.kube/config
... skipping 58 lines ...
Jun 22 16:15:49.649: INFO: PersistentVolumeClaim csi-hostpathcsb84 found but phase is Pending instead of Bound.
Jun 22 16:15:51.696: INFO: PersistentVolumeClaim csi-hostpathcsb84 found but phase is Pending instead of Bound.
Jun 22 16:15:53.743: INFO: PersistentVolumeClaim csi-hostpathcsb84 found but phase is Pending instead of Bound.
Jun 22 16:15:55.796: INFO: PersistentVolumeClaim csi-hostpathcsb84 found and phase=Bound (8.241623086s)
STEP: Expanding non-expandable pvc
Jun 22 16:15:55.888: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 22 16:15:55.984: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:15:58.080: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:00.081: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:02.079: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:04.080: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:06.081: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:08.080: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:10.095: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:12.079: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:14.080: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:16.082: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:18.082: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:20.079: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:22.087: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:24.081: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:26.088: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 22 16:16:26.194: INFO: Error updating pvc csi-hostpathcsb84: persistentvolumeclaims "csi-hostpathcsb84" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jun 22 16:16:26.194: INFO: Deleting PersistentVolumeClaim "csi-hostpathcsb84"
Jun 22 16:16:26.253: INFO: Waiting up to 5m0s for PersistentVolume pvc-ea273412-8b96-4d72-875e-d7d8c4afe173 to get deleted
Jun 22 16:16:26.302: INFO: PersistentVolume pvc-ea273412-8b96-4d72-875e-d7d8c4afe173 found and phase=Released (48.519941ms)
Jun 22 16:16:31.351: INFO: PersistentVolume pvc-ea273412-8b96-4d72-875e-d7d8c4afe173 was removed
STEP: Deleting sc
... skipping 53 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":2,"failed":0}
S
------------------------------
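The volume-expand test above doubles the PVC request from `1073741824` (1Gi) to `2147483648` bytes, as the `currentPvcSize … newSize …` line records. A minimal sketch of that quantity arithmetic (a hypothetical helper, not the e2e framework's own code):

```python
# Binary gibibyte, matching the BinarySI quantities printed in the log.
GI = 1024 ** 3

def doubled_request(current_bytes: int) -> int:
    """Return the new request size the expand attempt asks for (2x current)."""
    return current_bytes * 2

current = 1 * GI            # 1073741824, the 1Gi currentPvcSize from the log
new = doubled_request(current)
print(current, new)         # 1073741824 2147483648
```

The update is then rejected with "forbidden: only dynamically provisioned pvc can be resized …" because the test deliberately provisions with a StorageClass that does not allow volume expansion.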
[BeforeEach] [sig-node] PodTemplates
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:46.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-645" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:46.456: INFO: Driver local doesn't support ext4 -- skipping
... skipping 196 lines ...
test/e2e/common/node/framework.go:23
when scheduling a busybox command in a pod
test/e2e/common/node/kubelet.go:43
should print the output to logs [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:47.611: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 127 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6990" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:49.311: INFO: Only supported for providers [openstack] (not gce)
... skipping 28 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver "csi-hostpath" does not support topology - skipping
test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 32 lines ...
[BeforeEach] Pod Container lifecycle
test/e2e/node/pods.go:228
[It] should not create extra sandbox if all containers are done
test/e2e/node/pods.go:232
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Jun 22 16:16:41.763: INFO: Waiting up to 5m0s for pod "pod-always-succeed5c956693-5b07-4510-86da-ab4768220cc0" in namespace "pods-1632" to be "Succeeded or Failed"
Jun 22 16:16:41.809: INFO: Pod "pod-always-succeed5c956693-5b07-4510-86da-ab4768220cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 46.038553ms
Jun 22 16:16:43.854: INFO: Pod "pod-always-succeed5c956693-5b07-4510-86da-ab4768220cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091093849s
Jun 22 16:16:45.854: INFO: Pod "pod-always-succeed5c956693-5b07-4510-86da-ab4768220cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090442772s
Jun 22 16:16:47.854: INFO: Pod "pod-always-succeed5c956693-5b07-4510-86da-ab4768220cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090370562s
STEP: Saw pod success
Jun 22 16:16:47.854: INFO: Pod "pod-always-succeed5c956693-5b07-4510-86da-ab4768220cc0" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
test/e2e/framework/framework.go:187
Jun 22 16:16:49.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
test/e2e/node/framework.go:23
Pod Container lifecycle
test/e2e/node/pods.go:226
should not create extra sandbox if all containers are done
test/e2e/node/pods.go:232
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":5,"skipped":27,"failed":0}
S
------------------------------
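The Pods Extended test above waits "up to 5m0s" for the pod to reach "Succeeded or Failed", polling roughly every 2 seconds and logging the elapsed time each round. The steps can be sketched as a generic poll-until loop (a hypothetical helper mirroring the log's cadence, not the real e2e framework code):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or the timeout expires.

    timeout=300.0 corresponds to the log's "Waiting up to 5m0s";
    interval=2.0 matches the ~2s gaps between the Elapsed lines.
    """
    start = clock()
    while clock() - start < timeout:
        if check():
            return True
        sleep(interval)
    return False

# Usage with a stub pod that reaches a terminal phase on the third poll:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_condition(lambda: next(phases) in ("Succeeded", "Failed"),
                         interval=0))  # True
```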
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jun 22 16:16:34.682: INFO: PersistentVolumeClaim pvc-7vh2q found but phase is Pending instead of Bound.
Jun 22 16:16:36.728: INFO: PersistentVolumeClaim pvc-7vh2q found and phase=Bound (2.091869648s)
Jun 22 16:16:36.728: INFO: Waiting up to 3m0s for PersistentVolume local-rvqkw to have phase Bound
Jun 22 16:16:36.777: INFO: PersistentVolume local-rvqkw found and phase=Bound (49.08403ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xvwz
STEP: Creating a pod to test subpath
Jun 22 16:16:36.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xvwz" in namespace "provisioning-752" to be "Succeeded or Failed"
Jun 22 16:16:36.979: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Pending", Reason="", readiness=false. Elapsed: 48.184075ms
Jun 22 16:16:39.028: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09787101s
Jun 22 16:16:41.027: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096604253s
Jun 22 16:16:43.029: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098068579s
STEP: Saw pod success
Jun 22 16:16:43.029: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz" satisfied condition "Succeeded or Failed"
Jun 22 16:16:43.103: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-preprovisionedpv-xvwz container test-container-subpath-preprovisionedpv-xvwz: <nil>
STEP: delete the pod
Jun 22 16:16:43.220: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xvwz to disappear
Jun 22 16:16:43.267: INFO: Pod pod-subpath-test-preprovisionedpv-xvwz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xvwz
Jun 22 16:16:43.267: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xvwz" in namespace "provisioning-752"
STEP: Creating pod pod-subpath-test-preprovisionedpv-xvwz
STEP: Creating a pod to test subpath
Jun 22 16:16:43.363: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xvwz" in namespace "provisioning-752" to be "Succeeded or Failed"
Jun 22 16:16:43.409: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Pending", Reason="", readiness=false. Elapsed: 45.819298ms
Jun 22 16:16:45.462: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098583513s
Jun 22 16:16:47.458: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094388109s
Jun 22 16:16:49.461: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097755095s
STEP: Saw pod success
Jun 22 16:16:49.461: INFO: Pod "pod-subpath-test-preprovisionedpv-xvwz" satisfied condition "Succeeded or Failed"
Jun 22 16:16:49.522: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-preprovisionedpv-xvwz container test-container-subpath-preprovisionedpv-xvwz: <nil>
STEP: delete the pod
Jun 22 16:16:49.628: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xvwz to disappear
Jun 22 16:16:49.673: INFO: Pod pod-subpath-test-preprovisionedpv-xvwz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xvwz
Jun 22 16:16:49.673: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xvwz" in namespace "provisioning-752"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":45,"failed":0}
S
------------------------------
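The waits throughout these blocks log Go-style durations such as "8.241623086s" and "48.519941ms". A tiny seconds/milliseconds-only parser for those two suffixes, as a sketch (the log contains no other units in this excerpt):

```python
def parse_elapsed(s: str) -> float:
    """Return the logged duration in seconds.

    Handles only the "s" and "ms" suffixes seen in this log; anything
    else (e.g. "5m0s") would need a fuller Go-duration parser.
    """
    if s.endswith("ms"):
        return float(s[:-2]) / 1000.0
    if s.endswith("s"):
        return float(s[:-1])
    raise ValueError(f"unsupported duration: {s}")

print(parse_elapsed("8.241623086s"))  # 8.241623086
print(parse_elapsed("48.519941ms"))
```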
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 146 lines ...
test/e2e/common/network/framework.go:23
Granular Checks: Pods
test/e2e/common/network/networking.go:32
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:51.035: INFO: Only supported for providers [azure] (not gce)
... skipping 44 lines ...
Jun 22 16:16:20.850: INFO: PersistentVolumeClaim pvc-bv9d2 found but phase is Pending instead of Bound.
Jun 22 16:16:22.895: INFO: PersistentVolumeClaim pvc-bv9d2 found and phase=Bound (4.131710896s)
Jun 22 16:16:22.895: INFO: Waiting up to 3m0s for PersistentVolume local-wgqmz to have phase Bound
Jun 22 16:16:22.939: INFO: PersistentVolume local-wgqmz found and phase=Bound (43.495774ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tggd
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:16:23.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tggd" in namespace "provisioning-3186" to be "Succeeded or Failed"
Jun 22 16:16:23.122: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Pending", Reason="", readiness=false. Elapsed: 43.936677ms
Jun 22 16:16:25.167: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089195814s
Jun 22 16:16:27.172: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093924511s
Jun 22 16:16:29.167: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088927152s
Jun 22 16:16:31.169: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090632412s
Jun 22 16:16:33.172: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Running", Reason="", readiness=true. Elapsed: 10.093244198s
... skipping 4 lines ...
Jun 22 16:16:43.168: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Running", Reason="", readiness=true. Elapsed: 20.089875972s
Jun 22 16:16:45.169: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Running", Reason="", readiness=true. Elapsed: 22.091131301s
Jun 22 16:16:47.172: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Running", Reason="", readiness=true. Elapsed: 24.094119678s
Jun 22 16:16:49.172: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Running", Reason="", readiness=true. Elapsed: 26.093716393s
Jun 22 16:16:51.197: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.118702338s
STEP: Saw pod success
Jun 22 16:16:51.197: INFO: Pod "pod-subpath-test-preprovisionedpv-tggd" satisfied condition "Succeeded or Failed"
Jun 22 16:16:51.241: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-preprovisionedpv-tggd container test-container-subpath-preprovisionedpv-tggd: <nil>
STEP: delete the pod
Jun 22 16:16:51.341: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tggd to disappear
Jun 22 16:16:51.385: INFO: Pod pod-subpath-test-preprovisionedpv-tggd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tggd
Jun 22 16:16:51.385: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tggd" in namespace "provisioning-3186"
... skipping 21 lines ...
[90mtest/e2e/storage/in_tree_volumes.go:63[0m
[Testpattern: Pre-provisioned PV (default fs)] subPath
[90mtest/e2e/storage/framework/testsuite.go:50[0m
should support file as subpath [LinuxOnly]
[90mtest/e2e/storage/testsuites/subpath.go:232[0m
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":50,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:52.146: INFO: Only supported for providers [openstack] (not gce)
... skipping 80 lines ...
Jun 22 16:16:16.091: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:16:18.134: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087554016s
Jun 22 16:16:18.134: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:16:20.136: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.090178266s
Jun 22 16:16:20.136: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 22 16:16:20.136: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 22 16:16:20.136: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-1794 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.189.169:80 && echo service-down-failed'
Jun 22 16:16:22.679: INFO: rc: 28
Jun 22 16:16:22.679: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.189.169:80 && echo service-down-failed" in pod services-1794/verify-service-down-host-exec-pod: error running /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-1794 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.189.169:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.69.189.169:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1794
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Jun 22 16:16:22.823: INFO: Creating new host exec pod
... skipping 2 lines ...
Jun 22 16:16:22.911: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:16:24.954: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086133057s
Jun 22 16:16:24.954: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:16:26.956: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.087840895s
Jun 22 16:16:26.956: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 22 16:16:26.956: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 22 16:16:26.956: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-1794 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.67.119.37:80 && echo service-down-failed'
Jun 22 16:16:29.555: INFO: rc: 28
Jun 22 16:16:29.556: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.67.119.37:80 && echo service-down-failed" in pod services-1794/verify-service-down-host-exec-pod: error running /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-1794 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.67.119.37:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.67.119.37:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1794
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Jun 22 16:16:29.724: INFO: Creating new host exec pod
... skipping 34 lines ...
Jun 22 16:16:45.759: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:16:47.802: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088382186s
Jun 22 16:16:47.802: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 22 16:16:49.804: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.090659598s
Jun 22 16:16:49.804: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 22 16:16:49.804: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 22 16:16:49.804: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-1794 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.189.169:80 && echo service-down-failed'
Jun 22 16:16:52.332: INFO: rc: 28
Jun 22 16:16:52.332: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.189.169:80 && echo service-down-failed" in pod services-1794/verify-service-down-host-exec-pod: error running /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-1794 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.189.169:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.69.189.169:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1794
[AfterEach] [sig-network] Services
test/e2e/framework/framework.go:187
Jun 22 16:16:52.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:79.557 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should implement service.kubernetes.io/headless
test/e2e/network/service.go:2207
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":3,"failed":0}
SS
------------------------------
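The headless-service test above probes the service with `curl -g -s --connect-timeout 2` and treats exit code 28 (curl's documented "operation timed out") as confirmation the service is down, hence the repeated "rc: 28" lines. A small sketch of that interpretation (hypothetical helper names):

```python
# Descriptions follow curl's documented exit-code meanings.
CURL_EXIT = {
    0: "success",
    7: "failed to connect to host",
    28: "operation timed out",
}

def service_reachable(rc: int) -> bool:
    """The test expects the probe to time out (rc 28) when the service is down;
    only rc 0 counts as reachable."""
    return rc == 0

print(service_reachable(28))  # False: matches the "rc: 28" lines in the log
```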
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:52.545: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 50 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/auth/service_accounts.go:333
STEP: Creating a pod to test service account token:
Jun 22 16:16:29.342: INFO: Waiting up to 5m0s for pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" in namespace "svcaccounts-552" to be "Succeeded or Failed"
Jun 22 16:16:29.390: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 47.660045ms
Jun 22 16:16:31.443: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100289557s
Jun 22 16:16:33.436: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094097015s
Jun 22 16:16:35.438: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095541758s
STEP: Saw pod success
Jun 22 16:16:35.438: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" satisfied condition "Succeeded or Failed"
Jun 22 16:16:35.484: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:16:35.609: INFO: Waiting for pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 to disappear
Jun 22 16:16:35.654: INFO: Pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 no longer exists
STEP: Creating a pod to test service account token:
Jun 22 16:16:35.702: INFO: Waiting up to 5m0s for pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" in namespace "svcaccounts-552" to be "Succeeded or Failed"
Jun 22 16:16:35.752: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 50.363851ms
Jun 22 16:16:37.798: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Running", Reason="", readiness=true. Elapsed: 2.0964788s
Jun 22 16:16:39.802: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Running", Reason="", readiness=false. Elapsed: 4.100283378s
Jun 22 16:16:41.808: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105874813s
STEP: Saw pod success
Jun 22 16:16:41.808: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" satisfied condition "Succeeded or Failed"
Jun 22 16:16:41.857: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:16:41.966: INFO: Waiting for pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 to disappear
Jun 22 16:16:42.018: INFO: Pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 no longer exists
STEP: Creating a pod to test service account token:
Jun 22 16:16:42.066: INFO: Waiting up to 5m0s for pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" in namespace "svcaccounts-552" to be "Succeeded or Failed"
Jun 22 16:16:42.123: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 57.144158ms
Jun 22 16:16:44.176: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109975513s
Jun 22 16:16:46.175: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109204406s
STEP: Saw pod success
Jun 22 16:16:46.175: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" satisfied condition "Succeeded or Failed"
Jun 22 16:16:46.222: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:16:46.327: INFO: Waiting for pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 to disappear
Jun 22 16:16:46.379: INFO: Pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 no longer exists
STEP: Creating a pod to test service account token:
Jun 22 16:16:46.431: INFO: Waiting up to 5m0s for pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" in namespace "svcaccounts-552" to be "Succeeded or Failed"
Jun 22 16:16:46.484: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 53.143653ms
Jun 22 16:16:48.530: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099614146s
Jun 22 16:16:50.531: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Running", Reason="", readiness=true. Elapsed: 4.100480081s
Jun 22 16:16:52.532: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100915987s
STEP: Saw pod success
Jun 22 16:16:52.532: INFO: Pod "test-pod-c0da5232-e431-4c0b-847f-41614944b300" satisfied condition "Succeeded or Failed"
Jun 22 16:16:52.583: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:16:52.698: INFO: Waiting for pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 to disappear
Jun 22 16:16:52.745: INFO: Pod test-pod-c0da5232-e431-4c0b-847f-41614944b300 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:23.891 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/auth/service_accounts.go:333
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":39,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:52.879: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-projected-all-test-volume-4fe782fc-fc1f-41b4-b91a-ce0bd5830f13
STEP: Creating secret with name secret-projected-all-test-volume-d5d093a0-b86c-4426-90cf-4d3a62a172d3
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 22 16:16:47.749: INFO: Waiting up to 5m0s for pod "projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b" in namespace "projected-7079" to be "Succeeded or Failed"
Jun 22 16:16:47.796: INFO: Pod "projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.568918ms
Jun 22 16:16:49.845: INFO: Pod "projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096380433s
Jun 22 16:16:51.845: INFO: Pod "projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096434213s
Jun 22 16:16:53.845: INFO: Pod "projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09614667s
STEP: Saw pod success
Jun 22 16:16:53.845: INFO: Pod "projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b" satisfied condition "Succeeded or Failed"
Jun 22 16:16:53.892: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 22 16:16:54.009: INFO: Waiting for pod projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b to disappear
Jun 22 16:16:54.056: INFO: Pod projected-volume-63e37087-5052-4cfe-b425-fe355ddf640b no longer exists
[AfterEach] [sig-storage] Projected combined
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.899 seconds]
[sig-storage] Projected combined
test/e2e/common/storage/framework.go:23
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:54.200: INFO: Only supported for providers [aws] (not gce)
... skipping 209 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read-only inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:55.550: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 36 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:16:55.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-7985" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":5,"skipped":33,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:55.854: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 74 lines ...
• [SLOW TEST:83.241 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
Jun 22 16:16:50.720: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 22 16:16:50.720: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-5fw7
STEP: Creating a pod to test subpath
Jun 22 16:16:50.799: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-5fw7" in namespace "provisioning-7839" to be "Succeeded or Failed"
Jun 22 16:16:50.867: INFO: Pod "pod-subpath-test-inlinevolume-5fw7": Phase="Pending", Reason="", readiness=false. Elapsed: 67.399157ms
Jun 22 16:16:52.915: INFO: Pod "pod-subpath-test-inlinevolume-5fw7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115250313s
Jun 22 16:16:54.927: INFO: Pod "pod-subpath-test-inlinevolume-5fw7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127836666s
Jun 22 16:16:56.928: INFO: Pod "pod-subpath-test-inlinevolume-5fw7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128643036s
STEP: Saw pod success
Jun 22 16:16:56.928: INFO: Pod "pod-subpath-test-inlinevolume-5fw7" satisfied condition "Succeeded or Failed"
Jun 22 16:16:56.982: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-inlinevolume-5fw7 container test-container-subpath-inlinevolume-5fw7: <nil>
STEP: delete the pod
Jun 22 16:16:57.136: INFO: Waiting for pod pod-subpath-test-inlinevolume-5fw7 to disappear
Jun 22 16:16:57.191: INFO: Pod pod-subpath-test-inlinevolume-5fw7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-5fw7
Jun 22 16:16:57.191: INFO: Deleting pod "pod-subpath-test-inlinevolume-5fw7" in namespace "provisioning-7839"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":46,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:57.453: INFO: Driver local doesn't support ext4 -- skipping
... skipping 134 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:50
should verify that all csinodes have volume limits
test/e2e/storage/testsuites/volumelimits.go:249
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":3,"skipped":31,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:16:59.874: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
Jun 22 16:16:54.867: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5970 explain e2e-test-crd-publish-openapi-1338-crds.spec'
Jun 22 16:16:55.168: INFO: stderr: ""
Jun 22 16:16:55.168: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1338-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Jun 22 16:16:55.168: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5970 explain e2e-test-crd-publish-openapi-1338-crds.spec.bars'
Jun 22 16:16:55.479: INFO: stderr: ""
Jun 22 16:16:55.479: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1338-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 22 16:16:55.479: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5970 explain e2e-test-crd-publish-openapi-1338-crds.spec.bars2'
Jun 22 16:16:55.793: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
Jun 22 16:16:59.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5970" for this suite.
... skipping 2 lines ...
• [SLOW TEST:14.628 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
works for CRD with validation schema [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 124 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:17:00.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-6103" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":7,"skipped":23,"failed":0}
[BeforeEach] [sig-node] Kubelet
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:17:00.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:17:00.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3786" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":5,"skipped":27,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:00.543: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 131 lines ...
Jun 22 16:17:00.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-os-rejection
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should reject pod when the node OS doesn't match pod's OS
test/e2e/common/node/pod_admission.go:38
Jun 22 16:17:01.040: INFO: Waiting up to 2m0s for pod "wrong-pod-os" in namespace "pod-os-rejection-8032" to be "failed with reason PodOSNotSupported"
Jun 22 16:17:01.083: INFO: Pod "wrong-pod-os": Phase="Failed", Reason="PodOSNotSupported", readiness=false. Elapsed: 42.904766ms
Jun 22 16:17:01.083: INFO: Pod "wrong-pod-os" satisfied condition "failed with reason PodOSNotSupported"
[AfterEach] [sig-node] PodOSRejection [NodeConformance]
test/e2e/framework/framework.go:187
Jun 22 16:17:01.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-os-rejection-8032" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS","total":-1,"completed":9,"skipped":34,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:14.794 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:02.502: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 42 lines ...
test/e2e/framework/framework.go:186
[1mSTEP[0m: Creating a kubernetes client
Jun 22 16:17:02.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap that has name configmap-test-emptyKey-610e77eb-0d5c-42ee-a7f6-0f7a2635d5ac
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
Jun 22 16:17:02.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6316" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:03.018: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 71 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:17:03.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8669" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:04.084: INFO: Only supported for providers [openstack] (not gce)
... skipping 43 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in container's command
Jun 22 16:16:57.999: INFO: Waiting up to 5m0s for pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3" in namespace "var-expansion-5959" to be "Succeeded or Failed"
Jun 22 16:16:58.060: INFO: Pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3": Phase="Pending", Reason="", readiness=false. Elapsed: 60.184104ms
Jun 22 16:17:00.112: INFO: Pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112981428s
Jun 22 16:17:02.106: INFO: Pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106543377s
Jun 22 16:17:04.107: INFO: Pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107204361s
Jun 22 16:17:06.107: INFO: Pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107139089s
STEP: Saw pod success
Jun 22 16:17:06.107: INFO: Pod "var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3" satisfied condition "Succeeded or Failed"
Jun 22 16:17:06.153: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3 container dapi-container: <nil>
STEP: delete the pod
Jun 22 16:17:06.256: INFO: Waiting for pod var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3 to disappear
Jun 22 16:17:06.302: INFO: Pod var-expansion-6690cf59-3b5a-499f-8fce-72161f8601e3 no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.940 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should allow substituting values in a container's command [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:06.424: INFO: Driver "local" does not provide raw block - skipping
... skipping 116 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname only [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:17:00.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b" in namespace "projected-4431" to be "Succeeded or Failed"
Jun 22 16:17:00.339: INFO: Pod "downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.645343ms
Jun 22 16:17:02.385: INFO: Pod "downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093705444s
Jun 22 16:17:04.388: INFO: Pod "downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096717814s
Jun 22 16:17:06.387: INFO: Pod "downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096283638s
STEP: Saw pod success
Jun 22 16:17:06.387: INFO: Pod "downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b" satisfied condition "Succeeded or Failed"
Jun 22 16:17:06.435: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b container client-container: <nil>
STEP: delete the pod
Jun 22 16:17:06.539: INFO: Waiting for pod downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b to disappear
Jun 22 16:17:06.592: INFO: Pod downwardapi-volume-d5525a92-cba5-409f-9f80-9dbce859c85b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.799 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide podname only [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:15.719 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
should serve a basic image on each replica with a private image
test/e2e/apps/replica_set.go:115
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a private image","total":-1,"completed":2,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:08.746: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 89 lines ...
Jun 22 16:16:35.558: INFO: PersistentVolumeClaim pvc-tcgn5 found but phase is Pending instead of Bound.
Jun 22 16:16:37.601: INFO: PersistentVolumeClaim pvc-tcgn5 found and phase=Bound (2.086030374s)
Jun 22 16:16:37.602: INFO: Waiting up to 3m0s for PersistentVolume local-gw7bt to have phase Bound
Jun 22 16:16:37.646: INFO: PersistentVolume local-gw7bt found and phase=Bound (44.042422ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tp7h
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:16:37.777: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tp7h" in namespace "provisioning-6246" to be "Succeeded or Failed"
Jun 22 16:16:37.825: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Pending", Reason="", readiness=false. Elapsed: 47.871051ms
Jun 22 16:16:39.871: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09369812s
Jun 22 16:16:41.871: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=true. Elapsed: 4.093564329s
Jun 22 16:16:43.871: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=true. Elapsed: 6.093821787s
Jun 22 16:16:45.868: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=true. Elapsed: 8.091075952s
Jun 22 16:16:47.869: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=true. Elapsed: 10.091751144s
... skipping 5 lines ...
Jun 22 16:16:59.883: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=true. Elapsed: 22.105578449s
Jun 22 16:17:01.872: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=true. Elapsed: 24.095024942s
Jun 22 16:17:03.871: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=false. Elapsed: 26.093642821s
Jun 22 16:17:05.875: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Running", Reason="", readiness=false. Elapsed: 28.097684919s
Jun 22 16:17:07.869: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.09207222s
STEP: Saw pod success
Jun 22 16:17:07.869: INFO: Pod "pod-subpath-test-preprovisionedpv-tp7h" satisfied condition "Succeeded or Failed"
Jun 22 16:17:07.923: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-subpath-test-preprovisionedpv-tp7h container test-container-subpath-preprovisionedpv-tp7h: <nil>
STEP: delete the pod
Jun 22 16:17:08.050: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tp7h to disappear
Jun 22 16:17:08.093: INFO: Pod pod-subpath-test-preprovisionedpv-tp7h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tp7h
Jun 22 16:17:08.093: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tp7h" in namespace "provisioning-6246"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:08.896: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/framework/framework.go:187
... skipping 23 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:17:00.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d" in namespace "projected-7368" to be "Succeeded or Failed"
Jun 22 16:17:00.990: INFO: Pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.447932ms
Jun 22 16:17:03.036: INFO: Pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09247177s
Jun 22 16:17:05.039: INFO: Pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095528706s
Jun 22 16:17:07.039: INFO: Pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09522672s
Jun 22 16:17:09.037: INFO: Pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093862163s
STEP: Saw pod success
Jun 22 16:17:09.037: INFO: Pod "downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d" satisfied condition "Succeeded or Failed"
Jun 22 16:17:09.085: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d container client-container: <nil>
STEP: delete the pod
Jun 22 16:17:09.197: INFO: Waiting for pod downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d to disappear
Jun 22 16:17:09.253: INFO: Pod downwardapi-volume-e44f2fd8-1d1a-4ca4-b526-722829ccb20d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.806 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Probing container
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 53 lines ...
• [SLOW TEST:66.640 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
test/e2e/common/node/container_probe.go:244
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":34,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:11.476: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 88 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":52,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:12.966: INFO: Only supported for providers [openstack] (not gce)
... skipping 148 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":43,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:14.179: INFO: Only supported for providers [openstack] (not gce)
... skipping 62 lines ...
Jun 22 16:15:35.067: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1006
Jun 22 16:15:35.111: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1006
Jun 22 16:15:35.157: INFO: creating *v1.StatefulSet: csi-mock-volumes-1006-175/csi-mockplugin
Jun 22 16:15:35.208: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1006
Jun 22 16:15:35.257: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1006"
Jun 22 16:15:35.300: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1006 to register on node nodes-us-west4-a-z5t6
I0622 16:15:53.128258 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1006","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 16:15:53.368647 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0622 16:15:53.413660 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1006","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0622 16:15:53.456894 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0622 16:15:53.499445 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0622 16:15:53.939149 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1006"},"Error":"","FullError":null}
STEP: Creating pod
Jun 22 16:16:02.037: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 22 16:16:02.089: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-whtzz] to have phase Bound
I0622 16:16:02.115562 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jun 22 16:16:02.135: INFO: PersistentVolumeClaim pvc-whtzz found but phase is Pending instead of Bound.
I0622 16:16:03.161507 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c"}}},"Error":"","FullError":null}
Jun 22 16:16:04.180: INFO: PersistentVolumeClaim pvc-whtzz found and phase=Bound (2.090972624s)
Jun 22 16:16:04.314: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-kdfh5" in namespace "csi-mock-volumes-1006" to be "running"
Jun 22 16:16:04.359: INFO: Pod "pvc-volume-tester-kdfh5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.793929ms
I0622 16:16:05.969617 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 16:16:06.018313 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 16:16:06.068067 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 16:16:06.117: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:16:06.117: INFO: ExecWithOptions: Clientset creation
Jun 22 16:16:06.118: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/csi-mock-volumes-1006-175/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-1006%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-1006%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 16:16:06.403: INFO: Pod "pvc-volume-tester-kdfh5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089208302s
I0622 16:16:06.439010 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-1006/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c","storage.kubernetes.io/csiProvisionerIdentity":"1655914553520-8081-csi-mock-csi-mock-volumes-1006"}},"Response":{},"Error":"","FullError":null}
I0622 16:16:06.483955 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 16:16:06.531101 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 16:16:06.574231 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 22 16:16:06.616: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:16:06.617: INFO: ExecWithOptions: Clientset creation
Jun 22 16:16:06.617: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/csi-mock-volumes-1006-175/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-b6854c2b-5b56-4f8f-a983-4062273fe40c%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-b6854c2b-5b56-4f8f-a983-4062273fe40c%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 16:16:06.931: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:16:06.932: INFO: ExecWithOptions: Clientset creation
Jun 22 16:16:06.932: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/csi-mock-volumes-1006-175/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-b6854c2b-5b56-4f8f-a983-4062273fe40c%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-b6854c2b-5b56-4f8f-a983-4062273fe40c%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 22 16:16:07.260: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:16:07.261: INFO: ExecWithOptions: Clientset creation
Jun 22 16:16:07.261: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/csi-mock-volumes-1006-175/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-b6854c2b-5b56-4f8f-a983-4062273fe40c%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 16:16:07.615355 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-1006/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971/volumes/kubernetes.io~csi/pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c","storage.kubernetes.io/csiProvisionerIdentity":"1655914553520-8081-csi-mock-csi-mock-volumes-1006"}},"Response":{},"Error":"","FullError":null}
Jun 22 16:16:08.403: INFO: Pod "pvc-volume-tester-kdfh5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088904679s
Jun 22 16:16:10.404: INFO: Pod "pvc-volume-tester-kdfh5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089897422s
Jun 22 16:16:12.421: INFO: Pod "pvc-volume-tester-kdfh5": Phase="Running", Reason="", readiness=true. Elapsed: 8.1075907s
Jun 22 16:16:12.421: INFO: Pod "pvc-volume-tester-kdfh5" satisfied condition "running"
Jun 22 16:16:12.421: INFO: Deleting pod "pvc-volume-tester-kdfh5" in namespace "csi-mock-volumes-1006"
Jun 22 16:16:12.479: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kdfh5" to be fully deleted
Jun 22 16:16:12.724: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:16:12.725: INFO: ExecWithOptions: Clientset creation
Jun 22 16:16:12.725: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/csi-mock-volumes-1006-175/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-b6854c2b-5b56-4f8f-a983-4062273fe40c%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0622 16:16:13.070362 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/6afc8c70-2d1d-4d4a-9bb4-6869d0cf5971/volumes/kubernetes.io~csi/pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c/mount"},"Response":{},"Error":"","FullError":null}
I0622 16:16:13.128920 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0622 16:16:13.175910 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-1006/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
I0622 16:16:14.633161 7134 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 22 16:16:15.616: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-whtzz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1006", SelfLink:"", UID:"b6854c2b-5b56-4f8f-a983-4062273fe40c", ResourceVersion:"2676", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038c4d20), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0037656f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003765700), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 16:16:15.616: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-whtzz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1006", SelfLink:"", UID:"b6854c2b-5b56-4f8f-a983-4062273fe40c", ResourceVersion:"2678", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038c4d98), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038c4dc8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003765730), VolumeMode:(*v1.PersistentVolumeMode)(0xc003765740), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 16:16:15.616: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-whtzz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1006", SelfLink:"", UID:"b6854c2b-5b56-4f8f-a983-4062273fe40c", ResourceVersion:"2727", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003200c00), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003200c30), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c", StorageClassName:(*string)(0xc002877210), VolumeMode:(*v1.PersistentVolumeMode)(0xc0028772b0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 16:16:15.616: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-whtzz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1006", SelfLink:"", UID:"b6854c2b-5b56-4f8f-a983-4062273fe40c", ResourceVersion:"2729", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f69080), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f690b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f690e0), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c", StorageClassName:(*string)(0xc0029a7e30), VolumeMode:(*v1.PersistentVolumeMode)(0xc0029a7e40), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 22 16:16:15.616: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-whtzz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1006", SelfLink:"", UID:"b6854c2b-5b56-4f8f-a983-4062273fe40c", ResourceVersion:"3057", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), DeletionTimestamp:time.Date(2022, time.June, 22, 16, 16, 14, 0, time.Local), DeletionGracePeriodSeconds:(*int64)(0xc001c25b08), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1006"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f69140), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f69170), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 16, 16, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f691a0), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b6854c2b-5b56-4f8f-a983-4062273fe40c", StorageClassName:(*string)(0xc0029a7e80), VolumeMode:(*v1.PersistentVolumeMode)(0xc0029a7e90), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
... skipping 48 lines ...
test/e2e/storage/utils/framework.go:23
storage capacity
test/e2e/storage/csi_mock_volume.go:1100
exhausted, immediate binding
test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:14.899: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/framework/framework.go:187
... skipping 69 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-downwardapi-q6b6
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:16:44.502: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-q6b6" in namespace "subpath-1120" to be "Succeeded or Failed"
Jun 22 16:16:44.549: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Pending", Reason="", readiness=false. Elapsed: 47.457538ms
Jun 22 16:16:46.724: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 2.221933373s
Jun 22 16:16:48.596: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 4.094408408s
Jun 22 16:16:50.598: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 6.096436936s
Jun 22 16:16:52.596: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 8.09446062s
Jun 22 16:16:54.597: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 10.09516068s
... skipping 5 lines ...
Jun 22 16:17:06.598: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 22.095911333s
Jun 22 16:17:08.596: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 24.094097326s
Jun 22 16:17:10.598: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 26.096056454s
Jun 22 16:17:12.597: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Running", Reason="", readiness=true. Elapsed: 28.094565531s
Jun 22 16:17:14.597: INFO: Pod "pod-subpath-test-downwardapi-q6b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.095098711s
STEP: Saw pod success
Jun 22 16:17:14.597: INFO: Pod "pod-subpath-test-downwardapi-q6b6" satisfied condition "Succeeded or Failed"
Jun 22 16:17:14.647: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-subpath-test-downwardapi-q6b6 container test-container-subpath-downwardapi-q6b6: <nil>
STEP: delete the pod
Jun 22 16:17:14.758: INFO: Waiting for pod pod-subpath-test-downwardapi-q6b6 to disappear
Jun 22 16:17:14.802: INFO: Pod pod-subpath-test-downwardapi-q6b6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-q6b6
Jun 22 16:17:14.802: INFO: Deleting pod "pod-subpath-test-downwardapi-q6b6" in namespace "subpath-1120"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with downward pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:14.967: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
Jun 22 16:17:15.815: INFO: Creating a PV followed by a PVC
Jun 22 16:17:15.904: INFO: Waiting for PV local-pvvcr98 to bind to PVC pvc-q7247
Jun 22 16:17:15.904: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-q7247] to have phase Bound
Jun 22 16:17:15.947: INFO: PersistentVolumeClaim pvc-q7247 found and phase=Bound (42.658679ms)
Jun 22 16:17:15.947: INFO: Waiting up to 3m0s for PersistentVolume local-pvvcr98 to have phase Bound
Jun 22 16:17:15.989: INFO: PersistentVolume local-pvvcr98 found and phase=Bound (42.545572ms)
[It] should fail scheduling due to different NodeSelector
test/e2e/storage/persistent_volumes-local.go:381
STEP: local-volume-type: dir
Jun 22 16:17:16.119: INFO: Waiting up to 5m0s for pod "pod-db8fab6b-2dcc-4949-ac67-c9fdf1c3b5b6" in namespace "persistent-local-volumes-test-2116" to be "Unschedulable"
Jun 22 16:17:16.161: INFO: Pod "pod-db8fab6b-2dcc-4949-ac67-c9fdf1c3b5b6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.159119ms
Jun 22 16:17:16.161: INFO: Pod "pod-db8fab6b-2dcc-4949-ac67-c9fdf1c3b5b6" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 14 lines ...
• [SLOW TEST:7.849 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
Pod with node different from PV's NodeAffinity
test/e2e/storage/persistent_volumes-local.go:349
should fail scheduling due to different NodeSelector
test/e2e/storage/persistent_volumes-local.go:381
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":6,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:16.772: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 132 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:17.006: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 164 lines ...
Jun 22 16:17:04.863: INFO: PersistentVolumeClaim pvc-v8jml found but phase is Pending instead of Bound.
Jun 22 16:17:06.907: INFO: PersistentVolumeClaim pvc-v8jml found and phase=Bound (10.303765331s)
Jun 22 16:17:06.907: INFO: Waiting up to 3m0s for PersistentVolume local-td24s to have phase Bound
Jun 22 16:17:06.950: INFO: PersistentVolume local-td24s found and phase=Bound (43.18742ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bwhn
STEP: Creating a pod to test subpath
Jun 22 16:17:07.085: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bwhn" in namespace "provisioning-1531" to be "Succeeded or Failed"
Jun 22 16:17:07.130: INFO: Pod "pod-subpath-test-preprovisionedpv-bwhn": Phase="Pending", Reason="", readiness=false. Elapsed: 44.959822ms
Jun 22 16:17:09.175: INFO: Pod "pod-subpath-test-preprovisionedpv-bwhn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090817729s
Jun 22 16:17:11.176: INFO: Pod "pod-subpath-test-preprovisionedpv-bwhn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09120363s
Jun 22 16:17:13.176: INFO: Pod "pod-subpath-test-preprovisionedpv-bwhn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091704736s
Jun 22 16:17:15.179: INFO: Pod "pod-subpath-test-preprovisionedpv-bwhn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094663437s
STEP: Saw pod success
Jun 22 16:17:15.179: INFO: Pod "pod-subpath-test-preprovisionedpv-bwhn" satisfied condition "Succeeded or Failed"
Jun 22 16:17:15.234: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-subpath-test-preprovisionedpv-bwhn container test-container-subpath-preprovisionedpv-bwhn: <nil>
STEP: delete the pod
Jun 22 16:17:15.342: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bwhn to disappear
Jun 22 16:17:15.388: INFO: Pod pod-subpath-test-preprovisionedpv-bwhn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bwhn
Jun 22 16:17:15.388: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bwhn" in namespace "provisioning-1531"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:17.138: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 79 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 22 16:16:36.238: INFO: File wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:36.283: INFO: File jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:36.283: INFO: Lookups using dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d failed for: [wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local]
Jun 22 16:16:41.329: INFO: File wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:41.379: INFO: File jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:41.379: INFO: Lookups using dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d failed for: [wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local]
Jun 22 16:16:46.331: INFO: File wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:46.380: INFO: File jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:46.381: INFO: Lookups using dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d failed for: [wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local]
Jun 22 16:16:51.331: INFO: File wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:51.380: INFO: File jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local from pod dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 22 16:16:51.380: INFO: Lookups using dns-7433/dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d failed for: [wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local jessie_udp@dns-test-service-3.dns-7433.svc.cluster.local]
Jun 22 16:16:56.422: INFO: DNS probes using dns-test-8dd9ea7e-8de0-4dfd-a566-0c0b55bd369d succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7433.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7433.svc.cluster.local; sleep 1; done
... skipping 31 lines ...
• [SLOW TEST:58.014 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for ExternalName services [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
Jun 22 16:16:50.826: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 22 16:16:50.826: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-nbxg
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:16:50.908: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-nbxg" in namespace "provisioning-9934" to be "Succeeded or Failed"
Jun 22 16:16:50.954: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Pending", Reason="", readiness=false. Elapsed: 45.876878ms
Jun 22 16:16:52.998: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089435113s
Jun 22 16:16:55.002: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093948057s
Jun 22 16:16:57.012: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 6.103148372s
Jun 22 16:16:59.008: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 8.099289375s
Jun 22 16:17:01.003: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 10.094649535s
... skipping 5 lines ...
Jun 22 16:17:12.998: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 22.089623595s
Jun 22 16:17:15.000: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 24.091488452s
Jun 22 16:17:17.002: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 26.093153422s
Jun 22 16:17:18.999: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Running", Reason="", readiness=true. Elapsed: 28.091069356s
Jun 22 16:17:21.004: INFO: Pod "pod-subpath-test-inlinevolume-nbxg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.095552127s
STEP: Saw pod success
Jun 22 16:17:21.004: INFO: Pod "pod-subpath-test-inlinevolume-nbxg" satisfied condition "Succeeded or Failed"
Jun 22 16:17:21.048: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-inlinevolume-nbxg container test-container-subpath-inlinevolume-nbxg: <nil>
STEP: delete the pod
Jun 22 16:17:21.149: INFO: Waiting for pod pod-subpath-test-inlinevolume-nbxg to disappear
Jun 22 16:17:21.191: INFO: Pod pod-subpath-test-inlinevolume-nbxg no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-nbxg
Jun 22 16:17:21.191: INFO: Deleting pod "pod-subpath-test-inlinevolume-nbxg" in namespace "provisioning-9934"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":34,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:21.392: INFO: Only supported for providers [aws] (not gce)
... skipping 57 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-8585 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jun 22 16:17:19.396: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-8585 exec execpod-sfmgs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8585.svc.cluster.local:80/; test "$?" -ne "0"'
Jun 22 16:17:19.994: INFO: rc: 1
Jun 22 16:17:19.995: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-06-22 16:17:19.95304193 +0000 UTC m=+21.329563974, err error running /logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-8585 exec execpod-sfmgs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8585.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2022-06-22 16:17:19.95304193 +0000 UTC m=+21.329563974
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8585.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
error:
exit status 1
Jun 22 16:17:21.995: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=services-8585 exec execpod-sfmgs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8585.svc.cluster.local:80/; test "$?" -ne "0"'
Jun 22 16:17:22.611: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8585.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jun 22 16:17:22.611: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
... skipping 119 lines ...
• [SLOW TEST:16.801 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
should list and delete a collection of ReplicaSets [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:28.350: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 55 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
test/e2e/common/storage/projected_secret.go:92
STEP: Creating projection with secret that has name projected-secret-test-8d620b84-49d5-42d2-8dbe-78d8919a68de
STEP: Creating a pod to test consume secrets
Jun 22 16:17:17.766: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e" in namespace "projected-998" to be "Succeeded or Failed"
Jun 22 16:17:17.810: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 43.841874ms
Jun 22 16:17:19.860: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094135754s
Jun 22 16:17:21.857: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090912698s
Jun 22 16:17:23.855: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08867715s
Jun 22 16:17:25.855: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088934562s
Jun 22 16:17:27.854: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087893322s
Jun 22 16:17:29.859: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.092600527s
STEP: Saw pod success
Jun 22 16:17:29.859: INFO: Pod "pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e" satisfied condition "Succeeded or Failed"
Jun 22 16:17:29.907: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:17:30.010: INFO: Waiting for pod pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e to disappear
Jun 22 16:17:30.053: INFO: Pod pod-projected-secrets-17bfea64-61d9-4a7d-a620-349d53656b5e no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 5 lines ...
• [SLOW TEST:13.035 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
test/e2e/common/storage/projected_secret.go:92
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":6,"skipped":57,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 60 lines ...
Jun 22 16:16:53.678: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathz8qmj] to have phase Bound
Jun 22 16:16:53.722: INFO: PersistentVolumeClaim csi-hostpathz8qmj found but phase is Pending instead of Bound.
Jun 22 16:16:55.796: INFO: PersistentVolumeClaim csi-hostpathz8qmj found but phase is Pending instead of Bound.
Jun 22 16:16:57.852: INFO: PersistentVolumeClaim csi-hostpathz8qmj found and phase=Bound (4.173248414s)
STEP: Creating pod pod-subpath-test-dynamicpv-92pd
STEP: Creating a pod to test subpath
Jun 22 16:16:58.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-92pd" in namespace "provisioning-8332" to be "Succeeded or Failed"
Jun 22 16:16:58.138: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 77.863379ms
Jun 22 16:17:00.189: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128885135s
Jun 22 16:17:02.182: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122320123s
Jun 22 16:17:04.184: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124635953s
Jun 22 16:17:06.184: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124585656s
Jun 22 16:17:08.192: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.131930401s
Jun 22 16:17:10.185: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125675569s
Jun 22 16:17:12.183: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123366949s
Jun 22 16:17:14.187: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.127337632s
Jun 22 16:17:16.185: INFO: Pod "pod-subpath-test-dynamicpv-92pd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.12562189s
STEP: Saw pod success
Jun 22 16:17:16.186: INFO: Pod "pod-subpath-test-dynamicpv-92pd" satisfied condition "Succeeded or Failed"
Jun 22 16:17:16.237: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-subpath-test-dynamicpv-92pd container test-container-volume-dynamicpv-92pd: <nil>
STEP: delete the pod
Jun 22 16:17:16.334: INFO: Waiting for pod pod-subpath-test-dynamicpv-92pd to disappear
Jun 22 16:17:16.377: INFO: Pod pod-subpath-test-dynamicpv-92pd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-92pd
Jun 22 16:17:16.377: INFO: Deleting pod "pod-subpath-test-dynamicpv-92pd" in namespace "provisioning-8332"
... skipping 60 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":30,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:16.828 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":5,"skipped":52,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":10,"skipped":59,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:17:29.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:17:31.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1000" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":11,"skipped":59,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 132 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":37,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] HostPort
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 78 lines ...
• [SLOW TEST:37.341 seconds]
[sig-network] HostPort
test/e2e/network/common/framework.go:23
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 90 lines ...
test/e2e/kubectl/framework.go:23
Update Demo
test/e2e/kubectl/kubectl.go:322
should create and stop a replication controller [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:33.986: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:17:23.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name projected-secret-test-a73fb479-9039-4995-b308-85a274d793a2
STEP: Creating a pod to test consume secrets
Jun 22 16:17:24.114: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba" in namespace "projected-3581" to be "Succeeded or Failed"
Jun 22 16:17:24.161: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 46.928869ms
Jun 22 16:17:26.208: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093280049s
Jun 22 16:17:28.216: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101295913s
Jun 22 16:17:30.210: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095474092s
Jun 22 16:17:32.221: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10645967s
Jun 22 16:17:34.208: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093522457s
STEP: Saw pod success
Jun 22 16:17:34.208: INFO: Pod "pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba" satisfied condition "Succeeded or Failed"
Jun 22 16:17:34.255: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba container secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:17:34.367: INFO: Waiting for pod pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba to disappear
Jun 22 16:17:34.412: INFO: Pod pod-projected-secrets-56c2c99e-3ae4-4b87-86d4-edaa0d6b18ba no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 28 lines ...
• [SLOW TEST:5.865 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:36.935: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 100 lines ...
• [SLOW TEST:21.258 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:40.558: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 142 lines ...
• [SLOW TEST:26.652 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should serve a basic endpoint from pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":7,"skipped":101,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:43.548: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 35 lines ...
test/e2e/storage/testsuites/subpath.go:207
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:16:21.644: INFO: >>> kubeConfig: /root/.kube/config
... skipping 138 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":3,"skipped":20,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:44.229: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 123 lines ...
• [SLOW TEST:15.911 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:46.143: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 55 lines ...
Jun 22 16:17:44.736: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 22 16:17:44.736: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-9993 describe pod agnhost-primary-nm8gg'
Jun 22 16:17:45.085: INFO: stderr: ""
Jun 22 16:17:45.085: INFO: stdout: "Name: agnhost-primary-nm8gg\nNamespace: kubectl-9993\nPriority: 0\nNode: nodes-us-west4-a-7gg3/10.0.16.5\nStart Time: Wed, 22 Jun 2022 16:17:35 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 100.96.4.46\nIPs:\n IP: 100.96.4.46\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://3ec807eb7f365d6abae5d6158dfcfc344f7233c3ea2b3faf01c787dd6422698a\n Image: registry.k8s.io/e2e-test-images/agnhost:2.39\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 22 Jun 2022 16:17:36 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7nb8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-b7nb8:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned kubectl-9993/agnhost-primary-nm8gg to nodes-us-west4-a-7gg3\n Normal Pulled 9s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 9s kubelet Created container agnhost-primary\n Normal Started 9s kubelet Started container agnhost-primary\n"
Jun 22 16:17:45.085: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-9993 describe rc agnhost-primary'
Jun 22 16:17:45.446: INFO: stderr: ""
Jun 22 16:17:45.446: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9993\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 10s replication-controller Created pod: agnhost-primary-nm8gg\n"
Jun 22 16:17:45.446: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-9993 describe service agnhost-primary'
Jun 22 16:17:45.795: INFO: stderr: ""
Jun 22 16:17:45.795: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9993\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.66.220.73\nIPs: 100.66.220.73\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.4.46:6379\nSession Affinity: None\nEvents: <none>\n"
Jun 22 16:17:45.848: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-9993 describe node master-us-west4-a-6m23'
Jun 22 16:17:46.566: INFO: stderr: ""
Jun 22 16:17:46.566: INFO: stdout: "Name: master-us-west4-a-6m23\nRoles: control-plane\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=e2-standard-2\n beta.kubernetes.io/os=linux\n cloud.google.com/metadata-proxy-ready=true\n failure-domain.beta.kubernetes.io/region=us-west4\n failure-domain.beta.kubernetes.io/zone=us-west4-a\n kops.k8s.io/instancegroup=master-us-west4-a\n kops.k8s.io/kops-controller-pki=\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master-us-west4-a-6m23\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=e2-standard-2\n topology.gke.io/zone=us-west4-a\n topology.kubernetes.io/region=us-west4\n topology.kubernetes.io/zone=us-west4-a\nAnnotations: csi.volume.kubernetes.io/nodeid:\n {\"pd.csi.storage.gke.io\":\"projects/gce-gci-upg-1-3-lat-ctl-skew/zones/us-west4-a/instances/master-us-west4-a-6m23\"}\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 22 Jun 2022 16:10:20 +0000\nTaints: node-role.kubernetes.io/control-plane:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master-us-west4-a-6m23\n AcquireTime: <unset>\n RenewTime: Wed, 22 Jun 2022 16:17:42 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Wed, 22 Jun 2022 16:11:46 +0000 Wed, 22 Jun 2022 16:11:46 +0000 RouteCreated RouteController created a route\n MemoryPressure False Wed, 22 Jun 2022 16:17:08 +0000 Wed, 22 Jun 2022 16:10:18 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 22 Jun 2022 16:17:08 +0000 Wed, 22 Jun 2022 16:10:18 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 22 Jun 2022 16:17:08 +0000 Wed, 22 Jun 2022 16:10:18 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 22 Jun 2022 16:17:08 +0000 Wed, 22 Jun 2022 16:11:21 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.0.16.6\n ExternalIP: 34.125.12.204\n Hostname: master-us-west4-a-6m23\nCapacity:\n cpu: 2\n ephemeral-storage: 48600704Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8145396Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 44790408733\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8042996Ki\n pods: 110\nSystem Info:\n Machine ID: 1e2993bd6be4cd9ed6dc368d243d8192\n System UUID: 1e2993bd-6be4-cd9e-d6dc-368d243d8192\n Boot ID: 130a3f96-0656-4507-b9d6-688d159e3385\n Kernel Version: 5.11.0-1028-gcp\n OS Image: Ubuntu 20.04.3 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.6\n Kubelet Version: v1.25.0-alpha.1\n Kube-Proxy Version: v1.25.0-alpha.1\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: gce://gce-gci-upg-1-3-lat-ctl-skew/us-west4-a/master-us-west4-a-6m23\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n gce-pd-csi-driver csi-gce-pd-controller-9f559494d-pg5sx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m25s\n gce-pd-csi-driver csi-gce-pd-node-2pbtf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m25s\n kube-system cloud-controller-manager-hhm8v 200m (10%) 0 (0%) 0 (0%) 0 (0%) 6m25s\n kube-system dns-controller-78bc9bdd66-hdgmb 50m (2%) 0 (0%) 50Mi (0%) 0 (0%) 6m25s\n kube-system etcd-manager-events-master-us-west4-a-6m23 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 6m49s\n kube-system etcd-manager-main-master-us-west4-a-6m23 200m (10%) 0 (0%) 100Mi (1%) 0 (0%) 6m3s\n kube-system kops-controller-6k6n8 50m (2%) 0 (0%) 50Mi (0%) 0 (0%) 6m25s\n kube-system kube-apiserver-master-us-west4-a-6m23 150m (7%) 0 (0%) 0 (0%) 0 (0%) 6m13s\n kube-system kube-controller-manager-master-us-west4-a-6m23 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m54s\n kube-system kube-proxy-master-us-west4-a-6m23 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m40s\n kube-system kube-scheduler-master-us-west4-a-6m23 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m10s\n kube-system metadata-proxy-v0.12-txbkj 32m (1%) 32m (1%) 45Mi (0%) 45Mi (0%) 6m6s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1082m (54%) 32m (1%)\n memory 345Mi (4%) 45Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal NodeAllocatableEnforced 8m50s kubelet Updated Node Allocatable limit across pods\n Normal NodeHasSufficientMemory 8m50s (x8 over 8m53s) kubelet Node master-us-west4-a-6m23 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 8m50s (x7 over 8m53s) kubelet Node master-us-west4-a-6m23 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 8m50s (x7 over 8m53s) kubelet Node master-us-west4-a-6m23 status is now: NodeHasSufficientPID\n Normal RegisteredNode 6m26s node-controller Node master-us-west4-a-6m23 event: Registered Node master-us-west4-a-6m23 in Controller\n Normal Synced 6m7s (x3 over 6m8s) cloud-node-controller Node synced successfully\n Normal CIDRNotAvailable 5m23s (x10 over 6m7s) cidrAllocator Node master-us-west4-a-6m23 status is now: CIDRNotAvailable\n"
... skipping 11 lines ...
test/e2e/kubectl/framework.go:23
Kubectl describe
test/e2e/kubectl/kubectl.go:1259
should check if kubectl describe prints relevant information for rc and pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:47.108: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 101 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":10,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:17:27.535: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
Jun 22 16:17:34.566: INFO: PersistentVolumeClaim pvc-55zml found but phase is Pending instead of Bound.
Jun 22 16:17:36.613: INFO: PersistentVolumeClaim pvc-55zml found and phase=Bound (6.184740992s)
Jun 22 16:17:36.613: INFO: Waiting up to 3m0s for PersistentVolume local-ch6gs to have phase Bound
Jun 22 16:17:36.655: INFO: PersistentVolume local-ch6gs found and phase=Bound (42.252047ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kwfx
STEP: Creating a pod to test subpath
Jun 22 16:17:36.790: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kwfx" in namespace "provisioning-3631" to be "Succeeded or Failed"
Jun 22 16:17:36.835: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx": Phase="Pending", Reason="", readiness=false. Elapsed: 45.731705ms
Jun 22 16:17:38.880: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090025125s
Jun 22 16:17:40.879: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088876559s
Jun 22 16:17:42.878: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088544231s
Jun 22 16:17:44.881: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091420543s
Jun 22 16:17:46.914: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123987722s
STEP: Saw pod success
Jun 22 16:17:46.914: INFO: Pod "pod-subpath-test-preprovisionedpv-kwfx" satisfied condition "Succeeded or Failed"
Jun 22 16:17:46.979: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-subpath-test-preprovisionedpv-kwfx container test-container-subpath-preprovisionedpv-kwfx: <nil>
STEP: delete the pod
Jun 22 16:17:47.096: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kwfx to disappear
Jun 22 16:17:47.142: INFO: Pod pod-subpath-test-preprovisionedpv-kwfx no longer exists
[1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-kwfx
Jun 22 16:17:47.142: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kwfx" in namespace "provisioning-3631"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":35,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:47.850: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 22 16:17:33.661: INFO: Waiting up to 5m0s for pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6" in namespace "emptydir-6285" to be "Succeeded or Failed"
Jun 22 16:17:33.708: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.730455ms
Jun 22 16:17:35.762: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101316496s
Jun 22 16:17:37.759: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098436339s
Jun 22 16:17:39.757: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09639555s
Jun 22 16:17:41.755: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094078277s
Jun 22 16:17:43.755: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093594918s
Jun 22 16:17:45.755: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.093675373s
Jun 22 16:17:47.755: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.094264195s
STEP: Saw pod success
Jun 22 16:17:47.755: INFO: Pod "pod-8e8fc1a8-0203-44c0-b734-45c768524af6" satisfied condition "Succeeded or Failed"
Jun 22 16:17:47.802: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-8e8fc1a8-0203-44c0-b734-45c768524af6 container test-container: <nil>
STEP: delete the pod
Jun 22 16:17:47.903: INFO: Waiting for pod pod-8e8fc1a8-0203-44c0-b734-45c768524af6 to disappear
Jun 22 16:17:47.950: INFO: Pod pod-8e8fc1a8-0203-44c0-b734-45c768524af6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
volume on default medium should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":7,"skipped":55,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:48.072: INFO: Only supported for providers [azure] (not gce)
... skipping 58 lines ...
test/e2e/storage/testsuites/subpath.go:207
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:17:34.527: INFO: >>> kubeConfig: /root/.kube/config
... skipping 3 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
Jun 22 16:17:34.876: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 22 16:17:34.876: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-57xf
STEP: Creating a pod to test subpath
Jun 22 16:17:34.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-57xf" in namespace "provisioning-1422" to be "Succeeded or Failed"
Jun 22 16:17:34.983: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 52.01288ms
Jun 22 16:17:37.031: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100302559s
Jun 22 16:17:39.029: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098618935s
Jun 22 16:17:41.029: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098030235s
Jun 22 16:17:43.032: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100997994s
Jun 22 16:17:45.034: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102946424s
Jun 22 16:17:47.035: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.104443787s
Jun 22 16:17:49.033: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.102021881s
Jun 22 16:17:51.049: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118518792s
Jun 22 16:17:53.035: INFO: Pod "pod-subpath-test-inlinevolume-57xf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.104749283s
STEP: Saw pod success
Jun 22 16:17:53.036: INFO: Pod "pod-subpath-test-inlinevolume-57xf" satisfied condition "Succeeded or Failed"
Jun 22 16:17:53.096: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-inlinevolume-57xf container test-container-subpath-inlinevolume-57xf: <nil>
STEP: delete the pod
Jun 22 16:17:53.222: INFO: Waiting for pod pod-subpath-test-inlinevolume-57xf to disappear
Jun 22 16:17:53.291: INFO: Pod pod-subpath-test-inlinevolume-57xf no longer exists
[1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-57xf
Jun 22 16:17:53.291: INFO: Deleting pod "pod-subpath-test-inlinevolume-57xf" in namespace "provisioning-1422"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":18,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:53.623: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 160 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 77 lines ...
test/e2e/kubectl/framework.go:23
Kubectl server-side dry-run
test/e2e/kubectl/kubectl.go:954
should check if kubectl can dry-run update Pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:55.678: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 93 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/sysctl.go:67
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
test/e2e/common/node/sysctl.go:159
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Wait for pod failed reason
Jun 22 16:17:43.929: INFO: Waiting up to 5m0s for pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694" in namespace "sysctl-4410" to be "failed with reason SysctlForbidden"
Jun 22 16:17:43.972: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Pending", Reason="", readiness=false. Elapsed: 43.02166ms
Jun 22 16:17:46.017: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087538366s
Jun 22 16:17:48.023: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093677748s
Jun 22 16:17:50.023: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093757616s
Jun 22 16:17:52.022: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093365036s
Jun 22 16:17:54.016: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087048435s
Jun 22 16:17:56.017: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694": Phase="Failed", Reason="SysctlForbidden", readiness=false. Elapsed: 12.087595517s
Jun 22 16:17:56.017: INFO: Pod "sysctl-604fcf2b-02dc-4a1b-ad89-b4e9c0b4f694" satisfied condition "failed with reason SysctlForbidden"
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/framework/framework.go:187
Jun 22 16:17:56.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4410" for this suite.
• [SLOW TEST:12.534 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/framework.go:23
should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
test/e2e/common/node/sysctl.go:159
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":8,"skipped":111,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:41.383 seconds]
[sig-network] EndpointSlice
test/e2e/network/common/framework.go:23
should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:17:58.517: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
• [SLOW TEST:13.211 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:905
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":5,"skipped":62,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 19 lines ...
test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
test/e2e/storage/ephemeral_volume.go:55
should allow deletion of pod with invalid volume : configmap
test/e2e/storage/ephemeral_volume.go:57
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:00.795: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 59 lines ...
• [SLOW TEST:118.582 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should schedule multiple jobs concurrently [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:02.340: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 150 lines ...
• [SLOW TEST:60.537 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:07.289: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 22 16:17:44.798: INFO: Waiting up to 5m0s for pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f" in namespace "emptydir-4249" to be "Succeeded or Failed"
Jun 22 16:17:44.852: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 54.079808ms
Jun 22 16:17:46.916: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117854661s
Jun 22 16:17:48.900: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102733582s
Jun 22 16:17:50.899: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10173112s
Jun 22 16:17:52.898: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100778158s
Jun 22 16:17:54.899: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.101216002s
... skipping 2 lines ...
Jun 22 16:18:00.899: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101380736s
Jun 22 16:18:02.901: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.103052241s
Jun 22 16:18:04.899: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.101418809s
Jun 22 16:18:06.901: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.103071981s
Jun 22 16:18:08.899: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100962281s
STEP: Saw pod success
Jun 22 16:18:08.899: INFO: Pod "pod-12644c35-1ffc-4559-8138-f4bf6903d89f" satisfied condition "Succeeded or Failed"
Jun 22 16:18:08.946: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-12644c35-1ffc-4559-8138-f4bf6903d89f container test-container: <nil>
STEP: delete the pod
Jun 22 16:18:09.053: INFO: Waiting for pod pod-12644c35-1ffc-4559-8138-f4bf6903d89f to disappear
Jun 22 16:18:09.104: INFO: Pod pod-12644c35-1ffc-4559-8138-f4bf6903d89f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:24.833 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":56,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:09.265: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 109 lines ...
• [SLOW TEST:23.477 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should update labels on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:11.626: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 97 lines ...
Jun 22 16:17:35.096: INFO: PersistentVolumeClaim pvc-c7vk7 found but phase is Pending instead of Bound.
Jun 22 16:17:37.148: INFO: PersistentVolumeClaim pvc-c7vk7 found and phase=Bound (10.285240355s)
Jun 22 16:17:37.148: INFO: Waiting up to 3m0s for PersistentVolume local-bztmz to have phase Bound
Jun 22 16:17:37.194: INFO: PersistentVolume local-bztmz found and phase=Bound (45.629954ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-227k
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:17:37.337: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-227k" in namespace "provisioning-8109" to be "Succeeded or Failed"
Jun 22 16:17:37.392: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Pending", Reason="", readiness=false. Elapsed: 55.229846ms
Jun 22 16:17:39.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104202782s
Jun 22 16:17:41.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103595024s
Jun 22 16:17:43.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104188054s
Jun 22 16:17:45.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104213676s
Jun 22 16:17:47.437: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.100205306s
... skipping 8 lines ...
Jun 22 16:18:05.440: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Running", Reason="", readiness=true. Elapsed: 28.102449689s
Jun 22 16:18:07.440: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Running", Reason="", readiness=true. Elapsed: 30.102656171s
Jun 22 16:18:09.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Running", Reason="", readiness=true. Elapsed: 32.103302069s
Jun 22 16:18:11.443: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Running", Reason="", readiness=true. Elapsed: 34.105997981s
Jun 22 16:18:13.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.103808204s
STEP: Saw pod success
Jun 22 16:18:13.441: INFO: Pod "pod-subpath-test-preprovisionedpv-227k" satisfied condition "Succeeded or Failed"
Jun 22 16:18:13.489: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-preprovisionedpv-227k container test-container-subpath-preprovisionedpv-227k: <nil>
STEP: delete the pod
Jun 22 16:18:13.599: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-227k to disappear
Jun 22 16:18:13.647: INFO: Pod pod-subpath-test-preprovisionedpv-227k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-227k
Jun 22 16:18:13.647: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-227k" in namespace "provisioning-8109"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":32,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 29 lines ...
Jun 22 16:18:04.817: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Running", Reason="", readiness=true. Elapsed: 36.08855199s
Jun 22 16:18:06.818: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Running", Reason="", readiness=true. Elapsed: 38.090067211s
Jun 22 16:18:08.816: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Running", Reason="", readiness=true. Elapsed: 40.088286856s
Jun 22 16:18:10.816: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Running", Reason="", readiness=true. Elapsed: 42.08786533s
Jun 22 16:18:12.817: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Running", Reason="", readiness=true. Elapsed: 44.088581588s
Jun 22 16:18:14.822: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Running", Reason="", readiness=true. Elapsed: 46.094348998s
Jun 22 16:18:16.824: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d": Phase="Failed", Reason="Evicted", readiness=false. Elapsed: 48.096262786s
Jun 22 16:18:16.824: INFO: Pod "pod-should-be-evicted444a350a-d351-41ed-8c1d-9396340c690d" satisfied condition "terminated with reason Evicted"
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
test/e2e/framework/framework.go:187
Jun 22 16:18:16.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9197" for this suite.
... skipping 4 lines ...
test/e2e/node/framework.go:23
Pod Container lifecycle
test/e2e/node/pods.go:226
evicted pods should be terminal
test/e2e/node/pods.go:302
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal","total":-1,"completed":6,"skipped":47,"failed":0}
SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:17:39.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 65 lines ...
test/e2e/kubectl/portforward.go:454
that expects a client request
test/e2e/kubectl/portforward.go:455
should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:459
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":7,"skipped":32,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:19.181: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 251 lines ...
Jun 22 16:18:21.393: INFO: Creating a PV followed by a PVC
Jun 22 16:18:21.484: INFO: Waiting for PV local-pvj758h to bind to PVC pvc-2xf79
Jun 22 16:18:21.485: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2xf79] to have phase Bound
Jun 22 16:18:21.528: INFO: PersistentVolumeClaim pvc-2xf79 found and phase=Bound (42.978888ms)
Jun 22 16:18:21.528: INFO: Waiting up to 3m0s for PersistentVolume local-pvj758h to have phase Bound
Jun 22 16:18:21.571: INFO: PersistentVolume local-pvj758h found and phase=Bound (43.569216ms)
[It] should fail scheduling due to different NodeAffinity
test/e2e/storage/persistent_volumes-local.go:377
STEP: local-volume-type: dir
Jun 22 16:18:21.706: INFO: Waiting up to 5m0s for pod "pod-f4b6d2d0-8b80-495b-a3b7-cf1591b32459" in namespace "persistent-local-volumes-test-463" to be "Unschedulable"
Jun 22 16:18:21.752: INFO: Pod "pod-f4b6d2d0-8b80-495b-a3b7-cf1591b32459": Phase="Pending", Reason="", readiness=false. Elapsed: 46.697386ms
Jun 22 16:18:21.753: INFO: Pod "pod-f4b6d2d0-8b80-495b-a3b7-cf1591b32459" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 14 lines ...
• [SLOW TEST:21.951 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
Pod with node different from PV's NodeAffinity
test/e2e/storage/persistent_volumes-local.go:349
should fail scheduling due to different NodeAffinity
test/e2e/storage/persistent_volumes-local.go:377
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":6,"skipped":71,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:22.455: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 87 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:110
STEP: Creating a pod to test downward api env vars
Jun 22 16:17:58.907: INFO: Waiting up to 5m0s for pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf" in namespace "downward-api-8629" to be "Succeeded or Failed"
Jun 22 16:17:58.952: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 44.948325ms
Jun 22 16:18:00.998: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09027245s
Jun 22 16:18:02.997: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089947925s
Jun 22 16:18:04.998: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091001916s
Jun 22 16:18:06.996: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089231715s
Jun 22 16:18:08.999: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091700815s
... skipping 2 lines ...
Jun 22 16:18:14.996: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.08851981s
Jun 22 16:18:16.998: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.090416307s
Jun 22 16:18:18.998: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.090647598s
Jun 22 16:18:20.997: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.089975298s
Jun 22 16:18:23.027: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.119795582s
STEP: Saw pod success
Jun 22 16:18:23.027: INFO: Pod "downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf" satisfied condition "Succeeded or Failed"
Jun 22 16:18:23.078: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf container dapi-container: <nil>
STEP: delete the pod
Jun 22 16:18:23.271: INFO: Waiting for pod downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf to disappear
Jun 22 16:18:23.324: INFO: Pod downward-api-47df8272-5f21-4c8e-9f4b-09d8bbd005cf no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:24.912 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:110
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":4,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:23.472: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 16 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:16:15.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
test/e2e/apps/cronjob.go:291
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-8279" for this suite.
• [SLOW TEST:128.789 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should delete failed finished jobs with limit of one job
test/e2e/apps/cronjob.go:291
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":2,"skipped":26,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:24.027: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 128 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-5550f911-f17d-486b-8767-ba21700e7109
STEP: Creating a pod to test consume secrets
Jun 22 16:18:19.795: INFO: Waiting up to 5m0s for pod "pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f" in namespace "secrets-2686" to be "Succeeded or Failed"
Jun 22 16:18:19.838: INFO: Pod "pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 43.045412ms
Jun 22 16:18:21.883: INFO: Pod "pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087598209s
Jun 22 16:18:23.883: INFO: Pod "pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087902941s
Jun 22 16:18:25.885: INFO: Pod "pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090173798s
STEP: Saw pod success
Jun 22 16:18:25.885: INFO: Pod "pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f" satisfied condition "Succeeded or Failed"
Jun 22 16:18:25.930: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f container secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:18:26.038: INFO: Waiting for pod pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f to disappear
Jun 22 16:18:26.081: INFO: Pod pod-secrets-26e63aef-4081-4deb-aabb-ff8228cd0d8f no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.782 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 3 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
Jun 22 16:17:15.280: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:17:15.388: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5718" in namespace "provisioning-5718" to be "Succeeded or Failed"
Jun 22 16:17:15.433: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 44.966208ms
Jun 22 16:17:17.477: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08958136s
Jun 22 16:17:19.482: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094455766s
Jun 22 16:17:21.478: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090234503s
Jun 22 16:17:23.480: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092012906s
STEP: Saw pod success
Jun 22 16:17:23.480: INFO: Pod "hostpath-symlink-prep-provisioning-5718" satisfied condition "Succeeded or Failed"
Jun 22 16:17:23.480: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5718" in namespace "provisioning-5718"
Jun 22 16:17:23.535: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5718" to be fully deleted
Jun 22 16:17:23.581: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-z2hb
Jun 22 16:17:23.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-z2hb" in namespace "provisioning-5718" to be "running"
Jun 22 16:17:23.675: INFO: Pod "pod-subpath-test-inlinevolume-z2hb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.469342ms
... skipping 11 lines ...
Jun 22 16:17:40.272: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-z2hb
Jun 22 16:17:40.272: INFO: Deleting pod "pod-subpath-test-inlinevolume-z2hb" in namespace "provisioning-5718"
Jun 22 16:17:40.332: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-z2hb" to be fully deleted
STEP: Deleting pod
Jun 22 16:17:56.425: INFO: Deleting pod "pod-subpath-test-inlinevolume-z2hb" in namespace "provisioning-5718"
Jun 22 16:17:56.520: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5718" in namespace "provisioning-5718" to be "Succeeded or Failed"
Jun 22 16:17:56.564: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 43.914888ms
Jun 22 16:17:58.618: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098538825s
Jun 22 16:18:00.610: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090032049s
Jun 22 16:18:02.610: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090219772s
Jun 22 16:18:04.609: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089454438s
Jun 22 16:18:06.609: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08880151s
... skipping 5 lines ...
Jun 22 16:18:18.609: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 22.088774448s
Jun 22 16:18:20.608: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 24.088321988s
Jun 22 16:18:22.611: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 26.09096982s
Jun 22 16:18:24.610: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Pending", Reason="", readiness=false. Elapsed: 28.090160573s
Jun 22 16:18:26.614: INFO: Pod "hostpath-symlink-prep-provisioning-5718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.094433752s
STEP: Saw pod success
Jun 22 16:18:26.614: INFO: Pod "hostpath-symlink-prep-provisioning-5718" satisfied condition "Succeeded or Failed"
Jun 22 16:18:26.614: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5718" in namespace "provisioning-5718"
Jun 22 16:18:26.667: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5718" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 22 16:18:26.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5718" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:18:26.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:18:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3565" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 10 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:18:29.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-228" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should accurately determine present and missing resources","total":-1,"completed":4,"skipped":25,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
Jun 22 16:18:01.177: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1997 create -f -'
Jun 22 16:18:02.321: INFO: stderr: ""
Jun 22 16:18:02.321: INFO: stdout: "pod/httpd created\n"
Jun 22 16:18:02.321: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 22 16:18:02.322: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-1997" to be "running and ready"
Jun 22 16:18:02.367: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 45.106791ms
Jun 22 16:18:02.367: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:04.414: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092083244s
Jun 22 16:18:04.414: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:06.415: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093268999s
Jun 22 16:18:06.415: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:08.413: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091179724s
Jun 22 16:18:08.413: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:10.416: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094634978s
Jun 22 16:18:10.416: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:12.415: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.092997182s
Jun 22 16:18:12.415: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:14.437: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.114962438s
Jun 22 16:18:14.437: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:16.416: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.093982009s
Jun 22 16:18:16.416: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:18.414: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.092423326s
Jun 22 16:18:18.414: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:20.414: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.09286309s
Jun 22 16:18:20.414: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:22.413: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.091834338s
Jun 22 16:18:22.413: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:24.415: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.093114856s
Jun 22 16:18:24.415: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:26.414: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.092555095s
Jun 22 16:18:26.414: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:28.424: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.101973576s
Jun 22 16:18:28.424: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:18:30.415: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 28.093585635s
Jun 22 16:18:30.415: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 22 16:18:30.415: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] execing into a container with a successful command
test/e2e/kubectl/kubectl.go:528
Jun 22 16:18:30.415: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-1997 exec httpd --pod-running-timeout=2m0s -- /bin/sh -c exit 0'
... skipping 24 lines ...
test/e2e/kubectl/kubectl.go:407
should return command exit codes
test/e2e/kubectl/kubectl.go:527
execing into a container with a successful command
test/e2e/kubectl/kubectl.go:528
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":3,"skipped":38,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:32.269: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:18:32.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-6325" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":4,"skipped":49,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:32.807: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 569 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:50
should provision storage with pvc data source
test/e2e/storage/testsuites/provisioning.go:428
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":7,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:34.899: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 99 lines ...
test/e2e/storage/testsuites/topology.go:166
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0}
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:18:14.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
test/e2e/node/security_context.go:178
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 22 16:18:14.886: INFO: Waiting up to 5m0s for pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c" in namespace "security-context-8480" to be "Succeeded or Failed"
Jun 22 16:18:14.929: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.360158ms
Jun 22 16:18:16.973: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087021389s
Jun 22 16:18:18.975: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088740049s
Jun 22 16:18:20.973: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087298967s
Jun 22 16:18:22.985: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099094229s
Jun 22 16:18:24.974: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087748779s
Jun 22 16:18:26.973: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.087302595s
Jun 22 16:18:28.982: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.096138387s
Jun 22 16:18:30.974: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.087695087s
Jun 22 16:18:32.973: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.086519377s
Jun 22 16:18:34.986: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.09968253s
STEP: Saw pod success
Jun 22 16:18:34.986: INFO: Pod "security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c" satisfied condition "Succeeded or Failed"
Jun 22 16:18:35.029: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c container test-container: <nil>
STEP: delete the pod
Jun 22 16:18:35.128: INFO: Waiting for pod security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c to disappear
Jun 22 16:18:35.171: INFO: Pod security-context-65f3394f-5b3c-4c42-89dc-67629bf2192c no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:20.764 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support seccomp runtime/default [LinuxOnly]
test/e2e/node/security_context.go:178
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":10,"skipped":115,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 179 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":54,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:35.507: INFO: Only supported for providers [aws] (not gce)
... skipping 83 lines ...
• [SLOW TEST:43.462 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should update annotations on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Probing container
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:62.083 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:39.080: INFO: Only supported for providers [azure] (not gce)
... skipping 320 lines ...
Jun 22 16:18:27.155: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:27.155: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/hostexec-nodes-us-west4-a-z5t6-wnxwd/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-8398%22%2Fhost%3B+mount+-t+tmpfs+e2e-mount-propagation-host+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-8398%22%2Fhost%3B+echo+host+%3E+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-8398%22%2Fhost%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 16:18:27.533: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8398 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:27.533: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:27.534: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:27.534: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:27.854: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jun 22 16:18:27.907: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8398 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:27.907: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:27.908: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:27.908: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:28.233: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:28.280: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8398 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:28.280: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:28.281: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:28.281: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:28.631: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:28.678: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8398 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:28.678: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:28.679: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:28.679: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:29.060: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:29.116: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8398 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:29.116: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:29.117: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:29.117: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:29.489: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jun 22 16:18:29.534: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8398 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:29.534: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:29.535: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:29.535: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:29.934: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jun 22 16:18:29.978: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8398 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:29.978: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:29.979: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:29.979: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:30.315: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jun 22 16:18:30.361: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8398 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:30.361: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:30.362: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:30.362: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:30.744: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:30.790: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8398 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:30.790: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:30.791: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:30.791: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:31.169: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:31.212: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8398 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:31.212: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:31.213: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:31.213: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:31.538: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jun 22 16:18:31.581: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8398 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:31.581: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:31.582: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:31.582: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:32.023: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:32.067: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8398 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:32.067: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:32.068: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:32.068: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:32.500: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:32.552: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8398 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:32.552: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:32.553: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:32.553: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:32.904: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jun 22 16:18:32.949: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8398 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:32.949: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:32.950: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:32.950: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:33.345: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:33.388: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8398 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:33.388: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:33.389: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:33.389: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:33.799: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:33.843: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8398 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:33.843: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:33.844: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:33.844: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:34.224: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:34.272: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8398 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:34.272: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:34.273: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:34.273: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:34.631: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:34.674: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8398 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:34.674: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:34.675: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:34.675: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:34.988: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:35.031: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8398 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:35.031: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:35.032: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:35.032: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:35.376: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jun 22 16:18:35.420: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8398 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 22 16:18:35.420: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:35.420: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:35.420: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 22 16:18:35.752: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 22 16:18:35.752: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-8398 PodName:hostexec-nodes-us-west4-a-z5t6-wnxwd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 22 16:18:35.752: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 16:18:35.752: INFO: ExecWithOptions: Clientset creation
Jun 22 16:18:35.753: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/mount-propagation-8398/pods/hostexec-nodes-us-west4-a-z5t6-wnxwd/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=pidof+kubelet&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 16:18:36.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4989 -m cat "/var/lib/kubelet/mount-propagation-8398/host/file"] Namespace:mount-propagation-8398 PodName:hostexec-nodes-us-west4-a-z5t6-wnxwd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 22 16:18:36.105: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
• [SLOW TEST:54.008 seconds]
[sig-node] Mount propagation
test/e2e/node/framework.go:23
should propagate mounts within defined scopes
test/e2e/node/mount_propagation.go:85
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":8,"skipped":63,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:40.191: INFO: Only supported for providers [azure] (not gce)
... skipping 47 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 22 16:18:35.362: INFO: Waiting up to 5m0s for pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9" in namespace "emptydir-2813" to be "Succeeded or Failed"
Jun 22 16:18:35.407: INFO: Pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.25155ms
Jun 22 16:18:37.457: INFO: Pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094907432s
Jun 22 16:18:39.454: INFO: Pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09190231s
Jun 22 16:18:41.454: INFO: Pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091912661s
Jun 22 16:18:43.454: INFO: Pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092215125s
STEP: Saw pod success
Jun 22 16:18:43.454: INFO: Pod "pod-a68b73f0-d698-48b3-8fb6-7869af411fe9" satisfied condition "Succeeded or Failed"
Jun 22 16:18:43.500: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-a68b73f0-d698-48b3-8fb6-7869af411fe9 container test-container: <nil>
STEP: delete the pod
Jun 22 16:18:43.603: INFO: Waiting for pod pod-a68b73f0-d698-48b3-8fb6-7869af411fe9 to disappear
Jun 22 16:18:43.648: INFO: Pod pod-a68b73f0-d698-48b3-8fb6-7869af411fe9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.769 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:14.793 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
evictions: too few pods, absolute => should not allow an eviction
test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:44.045: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 156 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-d7aaa424-2008-4453-b944-a6558ef70db5
STEP: Creating a pod to test consume configMaps
Jun 22 16:18:35.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9" in namespace "configmap-6725" to be "Succeeded or Failed"
Jun 22 16:18:36.020: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.091656ms
Jun 22 16:18:38.069: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096616375s
Jun 22 16:18:40.070: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097631242s
Jun 22 16:18:42.070: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098142944s
Jun 22 16:18:44.069: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096583138s
Jun 22 16:18:46.069: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096674391s
STEP: Saw pod success
Jun 22 16:18:46.069: INFO: Pod "pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9" satisfied condition "Succeeded or Failed"
Jun 22 16:18:46.120: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:18:46.231: INFO: Waiting for pod pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9 to disappear
Jun 22 16:18:46.282: INFO: Pod pod-configmaps-1443c35f-6a74-4ba6-9b55-3ca3f88b29b9 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.922 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:46.461: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 103 lines ...
test/e2e/kubectl/portforward.go:454
that expects a client request
test/e2e/kubectl/portforward.go:455
should support a client that connects, sends NO DATA, and disconnects
test/e2e/kubectl/portforward.go:456
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":6,"skipped":46,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:46.589: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 109 lines ...
Jun 22 16:18:46.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
test/e2e/storage/volume_provisioning.go:743
Jun 22 16:18:47.019: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/framework.go:187
Jun 22 16:18:47.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-7184" for this suite.
S [SKIPPING] [0.457 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
Invalid AWS KMS key
test/e2e/storage/volume_provisioning.go:742
should report an error and create no PV [It]
test/e2e/storage/volume_provisioning.go:743
Only supported for providers [aws] (not gce)
test/e2e/storage/volume_provisioning.go:744
------------------------------
... skipping 72 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-601d7d75-539d-4194-bb26-22770d90e4d4
STEP: Creating a pod to test consume secrets
Jun 22 16:18:40.829: INFO: Waiting up to 5m0s for pod "pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703" in namespace "secrets-7181" to be "Succeeded or Failed"
Jun 22 16:18:40.877: INFO: Pod "pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703": Phase="Pending", Reason="", readiness=false. Elapsed: 47.90011ms
Jun 22 16:18:42.923: INFO: Pod "pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093469143s
Jun 22 16:18:44.921: INFO: Pod "pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091951388s
Jun 22 16:18:46.921: INFO: Pod "pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092135734s
STEP: Saw pod success
Jun 22 16:18:46.921: INFO: Pod "pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703" satisfied condition "Succeeded or Failed"
Jun 22 16:18:46.968: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703 container secret-volume-test: <nil>
STEP: delete the pod
Jun 22 16:18:47.093: INFO: Waiting for pod pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703 to disappear
Jun 22 16:18:47.136: INFO: Pod pod-secrets-21954768-6af7-414b-9f92-f2bff74f3703 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 5 lines ...
• [SLOW TEST:7.041 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 196 lines ...
• [SLOW TEST:34.051 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:49.615: INFO: Only supported for providers [aws] (not gce)
... skipping 70 lines ...
• [SLOW TEST:7.793 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:51.597: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 93 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-qstp
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:18:23.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qstp" in namespace "subpath-5383" to be "Succeeded or Failed"
Jun 22 16:18:23.148: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Pending", Reason="", readiness=false. Elapsed: 97.67166ms
Jun 22 16:18:25.193: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142958952s
Jun 22 16:18:27.197: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146546113s
Jun 22 16:18:29.206: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155371724s
Jun 22 16:18:31.197: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146877595s
Jun 22 16:18:33.193: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142720267s
... skipping 7 lines ...
Jun 22 16:18:49.210: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Running", Reason="", readiness=true. Elapsed: 26.159220148s
Jun 22 16:18:51.216: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Running", Reason="", readiness=true. Elapsed: 28.165666653s
Jun 22 16:18:53.193: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Running", Reason="", readiness=true. Elapsed: 30.142140766s
Jun 22 16:18:55.196: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Running", Reason="", readiness=true. Elapsed: 32.145216129s
Jun 22 16:18:57.192: INFO: Pod "pod-subpath-test-configmap-qstp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.141470167s
STEP: Saw pod success
Jun 22 16:18:57.192: INFO: Pod "pod-subpath-test-configmap-qstp" satisfied condition "Succeeded or Failed"
Jun 22 16:18:57.235: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-configmap-qstp container test-container-subpath-configmap-qstp: <nil>
STEP: delete the pod
Jun 22 16:18:57.333: INFO: Waiting for pod pod-subpath-test-configmap-qstp to disappear
Jun 22 16:18:57.378: INFO: Pod pod-subpath-test-configmap-qstp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qstp
Jun 22 16:18:57.378: INFO: Deleting pod "pod-subpath-test-configmap-qstp" in namespace "subpath-5383"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with configmap pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":-1,"completed":7,"skipped":89,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:57.536: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 229 lines ...
• [SLOW TEST:87.442 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
iterative rollouts should eventually progress
test/e2e/apps/deployment.go:135
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:18:59.047: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 113 lines ...
• [SLOW TEST:15.978 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 22 lines ...
Jun 22 16:18:46.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
test/e2e/framework/framework.go:647
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
Jun 22 16:18:59.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7868" for this suite.
STEP: Destroying namespace "webhook-7868-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:35.618 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:00.183: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 29 lines ...
[It] should support existing directory
test/e2e/storage/testsuites/subpath.go:207
Jun 22 16:18:50.013: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 22 16:18:50.013: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7mkw
STEP: Creating a pod to test subpath
Jun 22 16:18:50.063: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7mkw" in namespace "provisioning-791" to be "Succeeded or Failed"
Jun 22 16:18:50.110: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 47.009266ms
Jun 22 16:18:52.185: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12239963s
Jun 22 16:18:54.160: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097669785s
Jun 22 16:18:56.159: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096510258s
Jun 22 16:18:58.157: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09445411s
Jun 22 16:19:00.156: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093668315s
Jun 22 16:19:02.157: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.094489298s
Jun 22 16:19:04.162: INFO: Pod "pod-subpath-test-inlinevolume-7mkw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.099827737s
STEP: Saw pod success
Jun 22 16:19:04.163: INFO: Pod "pod-subpath-test-inlinevolume-7mkw" satisfied condition "Succeeded or Failed"
Jun 22 16:19:04.220: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-subpath-test-inlinevolume-7mkw container test-container-volume-inlinevolume-7mkw: <nil>
STEP: delete the pod
Jun 22 16:19:04.341: INFO: Waiting for pod pod-subpath-test-inlinevolume-7mkw to disappear
Jun 22 16:19:04.386: INFO: Pod pod-subpath-test-inlinevolume-7mkw no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7mkw
Jun 22 16:19:04.386: INFO: Deleting pod "pod-subpath-test-inlinevolume-7mkw" in namespace "provisioning-791"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":51,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:04.612: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 43 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 22 16:18:57.961: INFO: Waiting up to 5m0s for pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8" in namespace "emptydir-4948" to be "Succeeded or Failed"
Jun 22 16:18:58.005: INFO: Pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8": Phase="Pending", Reason="", readiness=false. Elapsed: 43.441785ms
Jun 22 16:19:00.050: INFO: Pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088875008s
Jun 22 16:19:02.054: INFO: Pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092700194s
Jun 22 16:19:04.054: INFO: Pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092944999s
Jun 22 16:19:06.052: INFO: Pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090476557s
STEP: Saw pod success
Jun 22 16:19:06.052: INFO: Pod "pod-72781f53-7f5b-47d7-a44e-390cd43496e8" satisfied condition "Succeeded or Failed"
Jun 22 16:19:06.097: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-72781f53-7f5b-47d7-a44e-390cd43496e8 container test-container: <nil>
STEP: delete the pod
Jun 22 16:19:06.195: INFO: Waiting for pod pod-72781f53-7f5b-47d7-a44e-390cd43496e8 to disappear
Jun 22 16:19:06.239: INFO: Pod pod-72781f53-7f5b-47d7-a44e-390cd43496e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.730 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":103,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:19:00.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 22 16:19:00.576: INFO: Waiting up to 5m0s for pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f" in namespace "emptydir-5966" to be "Succeeded or Failed"
Jun 22 16:19:00.622: INFO: Pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.301563ms
Jun 22 16:19:02.670: INFO: Pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094111337s
Jun 22 16:19:04.671: INFO: Pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095229261s
Jun 22 16:19:06.674: INFO: Pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097970756s
Jun 22 16:19:08.671: INFO: Pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095428183s
STEP: Saw pod success
Jun 22 16:19:08.671: INFO: Pod "pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f" satisfied condition "Succeeded or Failed"
Jun 22 16:19:08.717: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f container test-container: <nil>
STEP: delete the pod
Jun 22 16:19:08.819: INFO: Waiting for pod pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f to disappear
Jun 22 16:19:08.865: INFO: Pod pod-2dd9f8b4-e6b4-44bc-913b-6a123bf43d1f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.779 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:09.007: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
Jun 22 16:18:35.929: INFO: PersistentVolumeClaim pvc-phwkd found but phase is Pending instead of Bound.
Jun 22 16:18:37.973: INFO: PersistentVolumeClaim pvc-phwkd found and phase=Bound (14.38061747s)
Jun 22 16:18:37.973: INFO: Waiting up to 3m0s for PersistentVolume local-s4spz to have phase Bound
Jun 22 16:18:38.016: INFO: PersistentVolume local-s4spz found and phase=Bound (43.148877ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xrfk
STEP: Creating a pod to test subpath
Jun 22 16:18:38.146: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xrfk" in namespace "provisioning-2853" to be "Succeeded or Failed"
Jun 22 16:18:38.188: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 42.041701ms
Jun 22 16:18:40.238: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091350866s
Jun 22 16:18:42.232: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086233157s
Jun 22 16:18:44.232: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086147242s
Jun 22 16:18:46.233: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086702361s
Jun 22 16:18:48.235: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088455096s
Jun 22 16:18:50.233: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.086305593s
Jun 22 16:18:52.238: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092228192s
Jun 22 16:18:54.231: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.084867809s
Jun 22 16:18:56.233: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.086300038s
Jun 22 16:18:58.232: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 20.086121344s
Jun 22 16:19:00.233: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.087126263s
STEP: Saw pod success
Jun 22 16:19:00.233: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk" satisfied condition "Succeeded or Failed"
Jun 22 16:19:00.284: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-preprovisionedpv-xrfk container test-container-subpath-preprovisionedpv-xrfk: <nil>
STEP: delete the pod
Jun 22 16:19:00.413: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xrfk to disappear
Jun 22 16:19:00.456: INFO: Pod pod-subpath-test-preprovisionedpv-xrfk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xrfk
Jun 22 16:19:00.456: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xrfk" in namespace "provisioning-2853"
STEP: Creating pod pod-subpath-test-preprovisionedpv-xrfk
STEP: Creating a pod to test subpath
Jun 22 16:19:00.555: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xrfk" in namespace "provisioning-2853" to be "Succeeded or Failed"
Jun 22 16:19:00.597: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 42.698413ms
Jun 22 16:19:02.641: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08603286s
Jun 22 16:19:04.640: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085712397s
Jun 22 16:19:06.640: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085398892s
Jun 22 16:19:08.648: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093190679s
STEP: Saw pod success
Jun 22 16:19:08.648: INFO: Pod "pod-subpath-test-preprovisionedpv-xrfk" satisfied condition "Succeeded or Failed"
Jun 22 16:19:08.691: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod pod-subpath-test-preprovisionedpv-xrfk container test-container-subpath-preprovisionedpv-xrfk: <nil>
STEP: delete the pod
Jun 22 16:19:08.786: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xrfk to disappear
Jun 22 16:19:08.831: INFO: Pod pod-subpath-test-preprovisionedpv-xrfk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xrfk
Jun 22 16:19:08.831: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xrfk" in namespace "provisioning-2853"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":41,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:09.555: INFO: Only supported for providers [aws] (not gce)
... skipping 28 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-projected-vttv
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:18:37.754: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vttv" in namespace "subpath-537" to be "Succeeded or Failed"
Jun 22 16:18:37.810: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Pending", Reason="", readiness=false. Elapsed: 55.361091ms
Jun 22 16:18:39.863: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108985314s
Jun 22 16:18:41.859: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105049706s
Jun 22 16:18:43.860: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105625355s
Jun 22 16:18:45.866: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Running", Reason="", readiness=true. Elapsed: 8.111538997s
Jun 22 16:18:47.860: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Running", Reason="", readiness=true. Elapsed: 10.106002304s
... skipping 6 lines ...
Jun 22 16:19:01.861: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Running", Reason="", readiness=true. Elapsed: 24.106705378s
Jun 22 16:19:03.859: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Running", Reason="", readiness=true. Elapsed: 26.104932319s
Jun 22 16:19:05.865: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Running", Reason="", readiness=true. Elapsed: 28.110872396s
Jun 22 16:19:07.861: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Running", Reason="", readiness=true. Elapsed: 30.106658722s
Jun 22 16:19:09.859: INFO: Pod "pod-subpath-test-projected-vttv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.104245387s
STEP: Saw pod success
Jun 22 16:19:09.859: INFO: Pod "pod-subpath-test-projected-vttv" satisfied condition "Succeeded or Failed"
Jun 22 16:19:09.910: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-projected-vttv container test-container-subpath-projected-vttv: <nil>
STEP: delete the pod
Jun 22 16:19:10.027: INFO: Waiting for pod pod-subpath-test-projected-vttv to disappear
Jun 22 16:19:10.078: INFO: Pod pod-subpath-test-projected-vttv no longer exists
STEP: Deleting pod pod-subpath-test-projected-vttv
Jun 22 16:19:10.078: INFO: Deleting pod "pod-subpath-test-projected-vttv" in namespace "subpath-537"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with projected pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 106 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":70,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:13.309: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 37 lines ...
Driver hostPath doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":12,"skipped":37,"failed":0}
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:18:34.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 88 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:79
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 22 16:19:00.462: INFO: Waiting up to 5m0s for pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c" in namespace "security-context-4969" to be "Succeeded or Failed"
Jun 22 16:19:00.509: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.433964ms
Jun 22 16:19:02.557: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094831745s
Jun 22 16:19:04.567: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104551129s
Jun 22 16:19:06.559: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097363003s
Jun 22 16:19:08.563: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100946262s
Jun 22 16:19:10.557: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.094857595s
Jun 22 16:19:12.559: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.096920568s
Jun 22 16:19:14.561: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.098871892s
STEP: Saw pod success
Jun 22 16:19:14.561: INFO: Pod "security-context-693440ed-0c17-4706-96e4-663ca2cff39c" satisfied condition "Succeeded or Failed"
Jun 22 16:19:14.608: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod security-context-693440ed-0c17-4706-96e4-663ca2cff39c container test-container: <nil>
STEP: delete the pod
Jun 22 16:19:14.712: INFO: Waiting for pod security-context-693440ed-0c17-4706-96e4-663ca2cff39c to disappear
Jun 22 16:19:14.760: INFO: Pod security-context-693440ed-0c17-4706-96e4-663ca2cff39c no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:14.776 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:79
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":39,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:14.916: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 122 lines ...
test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
test/e2e/storage/csi_mock_volume.go:1334
CSIStorageCapacity used, no capacity
test/e2e/storage/csi_mock_volume.go:1377
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":7,"skipped":54,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 5 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-nhhx
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:18:46.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nhhx" in namespace "subpath-1957" to be "Succeeded or Failed"
Jun 22 16:18:47.050: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Pending", Reason="", readiness=false. Elapsed: 51.643666ms
Jun 22 16:18:49.128: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129886269s
Jun 22 16:18:51.101: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102389118s
Jun 22 16:18:53.099: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100788372s
Jun 22 16:18:55.122: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123021496s
Jun 22 16:18:57.109: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Running", Reason="", readiness=true. Elapsed: 10.11032148s
... skipping 5 lines ...
Jun 22 16:19:09.115: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Running", Reason="", readiness=true. Elapsed: 22.116292178s
Jun 22 16:19:11.104: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Running", Reason="", readiness=true. Elapsed: 24.105471406s
Jun 22 16:19:13.101: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Running", Reason="", readiness=true. Elapsed: 26.102538211s
Jun 22 16:19:15.104: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Running", Reason="", readiness=true. Elapsed: 28.104941474s
Jun 22 16:19:17.104: INFO: Pod "pod-subpath-test-configmap-nhhx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.105586611s
STEP: Saw pod success
Jun 22 16:19:17.104: INFO: Pod "pod-subpath-test-configmap-nhhx" satisfied condition "Succeeded or Failed"
Jun 22 16:19:17.152: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-configmap-nhhx container test-container-subpath-configmap-nhhx: <nil>
STEP: delete the pod
Jun 22 16:19:17.268: INFO: Waiting for pod pod-subpath-test-configmap-nhhx to disappear
Jun 22 16:19:17.316: INFO: Pod pod-subpath-test-configmap-nhhx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nhhx
Jun 22 16:19:17.317: INFO: Deleting pod "pod-subpath-test-configmap-nhhx" in namespace "subpath-1957"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with configmap pod with mountPath of existing file [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:17.525: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 65 lines ...
• [SLOW TEST:11.757 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
test/e2e/apimachinery/resource_quota.go:532
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":9,"skipped":106,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:18.136: INFO: Only supported for providers [azure] (not gce)
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
Jun 22 16:19:09.403: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2" in namespace "security-context-test-4243" to be "Succeeded or Failed"
Jun 22 16:19:09.451: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 47.782711ms
Jun 22 16:19:11.499: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095169219s
Jun 22 16:19:13.497: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093993585s
Jun 22 16:19:15.501: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097940349s
Jun 22 16:19:17.502: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098760875s
Jun 22 16:19:19.499: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095835516s
Jun 22 16:19:19.499: INFO: Pod "busybox-readonly-false-d1b6bf1e-b9bf-4dca-8945-ea01e467f3b2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 16:19:19.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4243" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a pod with readOnlyRootFilesystem
test/e2e/common/node/security_context.go:173
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":53,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 140 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support two pods which have the same volume definition
test/e2e/storage/testsuites/ephemeral.go:277
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":7,"skipped":63,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:22.650: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 114 lines ...
• [SLOW TEST:6.929 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
should validate Replicaset Status endpoints [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":8,"skipped":63,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 3 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
Jun 22 16:19:14.653: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:19:14.751: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7593" in namespace "provisioning-7593" to be "Succeeded or Failed"
Jun 22 16:19:14.799: INFO: Pod "hostpath-symlink-prep-provisioning-7593": Phase="Pending", Reason="", readiness=false. Elapsed: 48.099315ms
Jun 22 16:19:16.847: INFO: Pod "hostpath-symlink-prep-provisioning-7593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095203767s
Jun 22 16:19:18.845: INFO: Pod "hostpath-symlink-prep-provisioning-7593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094079649s
STEP: Saw pod success
Jun 22 16:19:18.845: INFO: Pod "hostpath-symlink-prep-provisioning-7593" satisfied condition "Succeeded or Failed"
Jun 22 16:19:18.846: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7593" in namespace "provisioning-7593"
Jun 22 16:19:18.898: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7593" to be fully deleted
Jun 22 16:19:18.943: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qzz8
STEP: Creating a pod to test subpath
Jun 22 16:19:18.992: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qzz8" in namespace "provisioning-7593" to be "Succeeded or Failed"
Jun 22 16:19:19.039: INFO: Pod "pod-subpath-test-inlinevolume-qzz8": Phase="Pending", Reason="", readiness=false. Elapsed: 47.10162ms
Jun 22 16:19:21.099: INFO: Pod "pod-subpath-test-inlinevolume-qzz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107159716s
Jun 22 16:19:23.088: INFO: Pod "pod-subpath-test-inlinevolume-qzz8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096250397s
Jun 22 16:19:25.085: INFO: Pod "pod-subpath-test-inlinevolume-qzz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093550764s
STEP: Saw pod success
Jun 22 16:19:25.086: INFO: Pod "pod-subpath-test-inlinevolume-qzz8" satisfied condition "Succeeded or Failed"
Jun 22 16:19:25.132: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-subpath-test-inlinevolume-qzz8 container test-container-subpath-inlinevolume-qzz8: <nil>
STEP: delete the pod
Jun 22 16:19:25.248: INFO: Waiting for pod pod-subpath-test-inlinevolume-qzz8 to disappear
Jun 22 16:19:25.294: INFO: Pod pod-subpath-test-inlinevolume-qzz8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qzz8
Jun 22 16:19:25.294: INFO: Deleting pod "pod-subpath-test-inlinevolume-qzz8" in namespace "provisioning-7593"
STEP: Deleting pod
Jun 22 16:19:25.339: INFO: Deleting pod "pod-subpath-test-inlinevolume-qzz8" in namespace "provisioning-7593"
Jun 22 16:19:25.431: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7593" in namespace "provisioning-7593" to be "Succeeded or Failed"
Jun 22 16:19:25.477: INFO: Pod "hostpath-symlink-prep-provisioning-7593": Phase="Pending", Reason="", readiness=false. Elapsed: 45.968519ms
Jun 22 16:19:27.524: INFO: Pod "hostpath-symlink-prep-provisioning-7593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093101569s
Jun 22 16:19:29.524: INFO: Pod "hostpath-symlink-prep-provisioning-7593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092168823s
STEP: Saw pod success
Jun 22 16:19:29.524: INFO: Pod "hostpath-symlink-prep-provisioning-7593" satisfied condition "Succeeded or Failed"
Jun 22 16:19:29.524: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7593" in namespace "provisioning-7593"
Jun 22 16:19:29.575: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7593" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 22 16:19:29.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7593" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":13,"skipped":41,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:29.743: INFO: Only supported for providers [azure] (not gce)
... skipping 125 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":91,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:31.917: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
Jun 22 16:19:24.753: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a"
Jun 22 16:19:24.753: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a" in namespace "pods-4689" to be "terminated with reason DeadlineExceeded"
Jun 22 16:19:24.804: INFO: Pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a": Phase="Running", Reason="", readiness=true. Elapsed: 50.223298ms
Jun 22 16:19:26.856: INFO: Pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a": Phase="Running", Reason="", readiness=true. Elapsed: 2.102478345s
Jun 22 16:19:28.854: INFO: Pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a": Phase="Running", Reason="", readiness=true. Elapsed: 4.100363526s
Jun 22 16:19:30.858: INFO: Pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a": Phase="Running", Reason="", readiness=true. Elapsed: 6.104580382s
Jun 22 16:19:32.853: INFO: Pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 8.099274035s
Jun 22 16:19:32.853: INFO: Pod "pod-update-activedeadlineseconds-fa5d1056-45ac-49ab-845b-bb2740482b7a" satisfied condition "terminated with reason DeadlineExceeded"
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:187
Jun 22 16:19:32.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4689" for this suite.
• [SLOW TEST:15.413 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":72,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 65 lines ...
Jun 22 16:18:37.179: INFO: PersistentVolumeClaim csi-hostpaths8cl6 found but phase is Pending instead of Bound.
Jun 22 16:18:39.240: INFO: PersistentVolumeClaim csi-hostpaths8cl6 found but phase is Pending instead of Bound.
Jun 22 16:18:41.285: INFO: PersistentVolumeClaim csi-hostpaths8cl6 found but phase is Pending instead of Bound.
Jun 22 16:18:43.327: INFO: PersistentVolumeClaim csi-hostpaths8cl6 found and phase=Bound (14.385594199s)
STEP: Creating pod pod-subpath-test-dynamicpv-chn2
STEP: Creating a pod to test subpath
Jun 22 16:18:43.462: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-chn2" in namespace "provisioning-8505" to be "Succeeded or Failed"
Jun 22 16:18:43.505: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 43.096624ms
Jun 22 16:18:45.556: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094012996s
Jun 22 16:18:47.553: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090755982s
Jun 22 16:18:49.563: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10029984s
Jun 22 16:18:51.552: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089690158s
Jun 22 16:18:53.552: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089313041s
... skipping 2 lines ...
Jun 22 16:18:59.562: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.0996937s
Jun 22 16:19:01.550: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.087396782s
Jun 22 16:19:03.556: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.093988169s
Jun 22 16:19:05.552: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.089513557s
Jun 22 16:19:07.551: INFO: Pod "pod-subpath-test-dynamicpv-chn2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.088854704s
STEP: Saw pod success
Jun 22 16:19:07.551: INFO: Pod "pod-subpath-test-dynamicpv-chn2" satisfied condition "Succeeded or Failed"
Jun 22 16:19:07.597: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-dynamicpv-chn2 container test-container-subpath-dynamicpv-chn2: <nil>
STEP: delete the pod
Jun 22 16:19:07.699: INFO: Waiting for pod pod-subpath-test-dynamicpv-chn2 to disappear
Jun 22 16:19:07.745: INFO: Pod pod-subpath-test-dynamicpv-chn2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-chn2
Jun 22 16:19:07.745: INFO: Deleting pod "pod-subpath-test-dynamicpv-chn2" in namespace "provisioning-8505"
... skipping 61 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:33.241: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 144 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":48,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:34.106: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 6 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 22 16:19:30.148: INFO: Waiting up to 5m0s for pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1" in namespace "security-context-159" to be "Succeeded or Failed"
Jun 22 16:19:30.196: INFO: Pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1": Phase="Pending", Reason="", readiness=false. Elapsed: 47.658721ms
Jun 22 16:19:32.241: INFO: Pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093291462s
Jun 22 16:19:34.242: INFO: Pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093653855s
Jun 22 16:19:36.243: INFO: Pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09481728s
Jun 22 16:19:38.243: INFO: Pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095295045s
STEP: Saw pod success
Jun 22 16:19:38.243: INFO: Pod "security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1" satisfied condition "Succeeded or Failed"
Jun 22 16:19:38.289: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1 container test-container: <nil>
STEP: delete the pod
Jun 22 16:19:38.396: INFO: Waiting for pod security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1 to disappear
Jun 22 16:19:38.442: INFO: Pod security-context-0afccb03-e731-49c9-ad29-53ac3e8108d1 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.781 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":48,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:38.563: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101
Jun 22 16:19:24.518: INFO: Waiting up to 5m0s for pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286" in namespace "security-context-test-2884" to be "Succeeded or Failed"
Jun 22 16:19:24.562: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 43.756339ms
Jun 22 16:19:26.608: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089739969s
Jun 22 16:19:28.608: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08932707s
Jun 22 16:19:30.607: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089012164s
Jun 22 16:19:32.607: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088608966s
Jun 22 16:19:34.610: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 10.092125175s
Jun 22 16:19:36.608: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Pending", Reason="", readiness=false. Elapsed: 12.089657857s
Jun 22 16:19:38.608: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.089854773s
Jun 22 16:19:38.608: INFO: Pod "busybox-user-0-8d6c0aab-f0d1-4909-bc99-d737e4dd8286" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 22 16:19:38.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2884" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a container with runAsUser
test/e2e/common/node/security_context.go:52
should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":66,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:38.740: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 194 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:50
should mount multiple PV pointing to the same storage on the same node
test/e2e/storage/testsuites/provisioning.go:525
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node","total":-1,"completed":9,"skipped":72,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:39.366: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's cpu request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:19:33.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa" in namespace "projected-7539" to be "Succeeded or Failed"
Jun 22 16:19:33.470: INFO: Pod "downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa": Phase="Pending", Reason="", readiness=false. Elapsed: 49.016142ms
Jun 22 16:19:35.524: INFO: Pod "downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103076943s
Jun 22 16:19:37.519: INFO: Pod "downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097821545s
Jun 22 16:19:39.520: INFO: Pod "downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098962444s
STEP: Saw pod success
Jun 22 16:19:39.520: INFO: Pod "downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa" satisfied condition "Succeeded or Failed"
Jun 22 16:19:39.571: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa container client-container: <nil>
STEP: delete the pod
Jun 22 16:19:39.691: INFO: Waiting for pod downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa to disappear
Jun 22 16:19:39.741: INFO: Pod downwardapi-volume-0d1482c3-31c4-4670-9f3d-0ed2cdbd0faa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.852 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide container's cpu request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:39.889: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/framework/framework.go:187
... skipping 48 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 84 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should implement legacy replacement when the update strategy is OnDelete
test/e2e/apps/statefulset.go:507
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":7,"skipped":40,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:43.578: INFO: Only supported for providers [aws] (not gce)
... skipping 101 lines ...
• [SLOW TEST:9.474 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should release NodePorts on delete
test/e2e/network/service.go:1594
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":9,"skipped":56,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
... skipping 143 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:315
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":5,"skipped":51,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:45.184: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 99 lines ...
test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
test/e2e/common/node/lifecycle_hook.go:46
should execute prestop exec hook properly [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:45.840: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/framework/framework.go:187
... skipping 162 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:46.997: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:187
... skipping 204 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":7,"skipped":50,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:47.923: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 82 lines ...
Jun 22 16:19:20.720: INFO: Pod "pvc-volume-tester-jds5k" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 22 16:19:25.721: INFO: Deleting pod "pvc-volume-tester-jds5k" in namespace "csi-mock-volumes-8206"
Jun 22 16:19:25.771: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jds5k" to be fully deleted
STEP: Checking CSI driver logs
Jun 22 16:19:31.938: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1rX3FwTlA5cTlWZDdIbVk4OEp5elIta1NIQWRCSl93TzlwTmZXWWo1WW8ifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2NTU5MTUzNTQsImlhdCI6MTY1NTkxNDc1NCwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLWUyZS1rb3BzLWdjZS1zdGFibGUuazhzLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTgyMDYiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLWpkczVrIiwidWlkIjoiYmM1YmQ0NmQtYmI5Zi00YWUzLTlmZjUtNTkzODFkMmNlODVmIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiODVjM2M0OGEtZGNlNi00MTA4LTk5M2EtODE0MGZjNmIwODZiIn19LCJuYmYiOjE2NTU5MTQ3NTQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTgyMDY6ZGVmYXVsdCJ9.jwk1I8ce9Kmi_bYbe4sCcdo7MbXtsSOzqQzn-vR7ZdLVMWVQtFPcduD1ES0gfuFXXhcuBWdxBILKyt2WwKWcPDfYHdGYLZ8Oeyn8Xv-cpHKfNfryK1LDIkHKVsFdVUCO_yYGP2i20ndf4w1lqtFniVsLzdZjHnpeD_y-_j_NkdWFoNQ5ZVWx063oWAzZnuKaCXEBcgwq47df3PJqZhG1y-mPOOMkIMJixkRGJeDbsESk1dupLgFcNoqYWr_EIr-geKaHNHaffNGnUS6fwIpmaenasop6IMybCRURWUoluVHuvCaIM3_a2druW_R3VZLcWCzkPrZH60GGNQOYzQoZVg","expirationTimestamp":"2022-06-22T16:29:14Z"}}
Jun 22 16:19:31.938: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"09ddac4b-f247-11ec-b667-b60307f65321","target_path":"/var/lib/kubelet/pods/bc5bd46d-bb9f-4ae3-9ff5-59381d2ce85f/volumes/kubernetes.io~csi/pvc-249b2200-7eb4-47a3-ab00-fa0133402ab2/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-jds5k
Jun 22 16:19:31.938: INFO: Deleting pod "pvc-volume-tester-jds5k" in namespace "csi-mock-volumes-8206"
STEP: Deleting claim pvc-68xkt
Jun 22 16:19:32.085: INFO: Waiting up to 2m0s for PersistentVolume pvc-249b2200-7eb4-47a3-ab00-fa0133402ab2 to get deleted
Jun 22 16:19:32.136: INFO: PersistentVolume pvc-249b2200-7eb4-47a3-ab00-fa0133402ab2 found and phase=Released (51.086757ms)
Jun 22 16:19:34.182: INFO: PersistentVolume pvc-249b2200-7eb4-47a3-ab00-fa0133402ab2 was removed
... skipping 85 lines ...
• [SLOW TEST:9.508 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
listing mutating webhooks should work [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":10,"skipped":70,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 22 16:19:39.700: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:19:39.757: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2s7d
STEP: Creating a pod to test subpath
Jun 22 16:19:39.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2s7d" in namespace "provisioning-9182" to be "Succeeded or Failed"
Jun 22 16:19:39.891: INFO: Pod "pod-subpath-test-inlinevolume-2s7d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.369122ms
Jun 22 16:19:41.939: INFO: Pod "pod-subpath-test-inlinevolume-2s7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095930667s
Jun 22 16:19:43.940: INFO: Pod "pod-subpath-test-inlinevolume-2s7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096476345s
Jun 22 16:19:45.941: INFO: Pod "pod-subpath-test-inlinevolume-2s7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097947359s
Jun 22 16:19:47.939: INFO: Pod "pod-subpath-test-inlinevolume-2s7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096032353s
STEP: Saw pod success
Jun 22 16:19:47.939: INFO: Pod "pod-subpath-test-inlinevolume-2s7d" satisfied condition "Succeeded or Failed"
Jun 22 16:19:47.991: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-inlinevolume-2s7d container test-container-volume-inlinevolume-2s7d: <nil>
STEP: delete the pod
Jun 22 16:19:48.089: INFO: Waiting for pod pod-subpath-test-inlinevolume-2s7d to disappear
Jun 22 16:19:48.140: INFO: Pod pod-subpath-test-inlinevolume-2s7d no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2s7d
Jun 22 16:19:48.140: INFO: Deleting pod "pod-subpath-test-inlinevolume-2s7d" in namespace "provisioning-9182"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":75,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:48.367: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 181 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":54,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:49.120: INFO: Only supported for providers [azure] (not gce)
... skipping 100 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:19:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-28" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":11,"skipped":93,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:49.813: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:19:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-605" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 9 lines ...
test/e2e/framework/framework.go:187
Jun 22 16:19:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5577" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":11,"skipped":71,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
[1mSTEP[0m: Creating a kubernetes client
Jun 22 16:19:43.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
test/e2e/node/security_context.go:171
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 22 16:19:44.045: INFO: Waiting up to 5m0s for pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc" in namespace "security-context-6348" to be "Succeeded or Failed"
Jun 22 16:19:44.106: INFO: Pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 61.093257ms
Jun 22 16:19:46.155: INFO: Pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10945071s
Jun 22 16:19:48.154: INFO: Pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109197843s
Jun 22 16:19:50.153: INFO: Pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107945791s
Jun 22 16:19:52.157: INFO: Pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111302932s
STEP: Saw pod success
Jun 22 16:19:52.157: INFO: Pod "security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc" satisfied condition "Succeeded or Failed"
Jun 22 16:19:52.203: INFO: Trying to get logs from node nodes-us-west4-a-r4pg pod security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc container test-container: <nil>
STEP: delete the pod
Jun 22 16:19:52.323: INFO: Waiting for pod security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc to disappear
Jun 22 16:19:52.369: INFO: Pod security-context-b0feb134-d5a5-4435-bdb4-b96d66fc4ebc no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.856 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support seccomp unconfined on the pod [LinuxOnly]
test/e2e/node/security_context.go:171
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":10,"skipped":57,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:52.503: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Jun 22 16:19:30.681: INFO: ExecWithOptions: Clientset creation
Jun 22 16:19:30.681: INFO: ExecWithOptions: execute(POST https://34.125.165.160/api/v1/namespaces/sctp-4270/pods/hostexec-nodes-us-west4-a-z5t6-826fc/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 22 16:19:30.998: INFO: exec nodes-us-west4-a-z5t6: command: lsmod | grep sctp
Jun 22 16:19:30.998: INFO: exec nodes-us-west4-a-z5t6: stdout: ""
Jun 22 16:19:30.998: INFO: exec nodes-us-west4-a-z5t6: stderr: ""
Jun 22 16:19:30.998: INFO: exec nodes-us-west4-a-z5t6: exit code: 0
Jun 22 16:19:30.998: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 22 16:19:30.998: INFO: the sctp module is not loaded on node: nodes-us-west4-a-z5t6
STEP: Deleting pod hostexec-nodes-us-west4-a-z5t6-826fc in namespace sctp-4270
STEP: creating a pod with hostport on the selected node
STEP: Launching the pod on node nodes-us-west4-a-z5t6
Jun 22 16:19:31.103: INFO: Waiting up to 5m0s for pod "hostport" in namespace "sctp-4270" to be "running and ready"
Jun 22 16:19:31.147: INFO: Pod "hostport": Phase="Pending", Reason="", readiness=false. Elapsed: 43.200055ms
... skipping 42 lines ...
test/e2e/network/common/framework.go:23
should create a Pod with SCTP HostPort
test/e2e/network/service.go:4124
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort","total":-1,"completed":10,"skipped":113,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:52.552: INFO: Only supported for providers [aws] (not gce)
... skipping 163 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":8,"skipped":77,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:53.942: INFO: Driver local doesn't support ext3 -- skipping
... skipping 370 lines ...
test/e2e/network/common/framework.go:23
version v1
test/e2e/network/proxy.go:74
should proxy through a service and a pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":11,"skipped":86,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:56.341: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 118 lines ...
Jun 22 16:19:20.399: INFO: PersistentVolumeClaim pvc-jctj7 found but phase is Pending instead of Bound.
Jun 22 16:19:22.444: INFO: PersistentVolumeClaim pvc-jctj7 found and phase=Bound (14.371297712s)
Jun 22 16:19:22.444: INFO: Waiting up to 3m0s for PersistentVolume local-5clqk to have phase Bound
Jun 22 16:19:22.490: INFO: PersistentVolume local-5clqk found and phase=Bound (45.286505ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qkt7
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 16:19:22.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qkt7" in namespace "provisioning-9917" to be "Succeeded or Failed"
Jun 22 16:19:22.682: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 47.708391ms
Jun 22 16:19:24.729: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094185967s
Jun 22 16:19:26.731: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096595144s
Jun 22 16:19:28.730: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09529242s
Jun 22 16:19:30.730: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095381035s
Jun 22 16:19:32.730: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Running", Reason="", readiness=true. Elapsed: 10.095112252s
... skipping 7 lines ...
Jun 22 16:19:48.734: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Running", Reason="", readiness=true. Elapsed: 26.099201985s
Jun 22 16:19:50.730: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Running", Reason="", readiness=true. Elapsed: 28.09571833s
Jun 22 16:19:52.735: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Running", Reason="", readiness=true. Elapsed: 30.100656492s
Jun 22 16:19:54.729: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Running", Reason="", readiness=true. Elapsed: 32.093916318s
Jun 22 16:19:56.727: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.092008096s
STEP: Saw pod success
Jun 22 16:19:56.727: INFO: Pod "pod-subpath-test-preprovisionedpv-qkt7" satisfied condition "Succeeded or Failed"
Jun 22 16:19:56.773: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-subpath-test-preprovisionedpv-qkt7 container test-container-subpath-preprovisionedpv-qkt7: <nil>
STEP: delete the pod
Jun 22 16:19:56.872: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qkt7 to disappear
Jun 22 16:19:56.917: INFO: Pod pod-subpath-test-preprovisionedpv-qkt7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qkt7
Jun 22 16:19:56.917: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qkt7" in namespace "provisioning-9917"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","total":-1,"completed":15,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:19:45.301: INFO: >>> kubeConfig: /root/.kube/config
... skipping 3 lines ...
[It] should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
Jun 22 16:19:45.628: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:19:45.678: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hfh6
STEP: Creating a pod to test subpath
Jun 22 16:19:45.729: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hfh6" in namespace "provisioning-6526" to be "Succeeded or Failed"
Jun 22 16:19:45.775: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.048545ms
Jun 22 16:19:47.821: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091757783s
Jun 22 16:19:49.829: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10013009s
Jun 22 16:19:51.826: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097429981s
Jun 22 16:19:53.826: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096597202s
Jun 22 16:19:55.823: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.094079749s
Jun 22 16:19:57.824: INFO: Pod "pod-subpath-test-inlinevolume-hfh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.094726564s
STEP: Saw pod success
Jun 22 16:19:57.824: INFO: Pod "pod-subpath-test-inlinevolume-hfh6" satisfied condition "Succeeded or Failed"
Jun 22 16:19:57.870: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-subpath-test-inlinevolume-hfh6 container test-container-subpath-inlinevolume-hfh6: <nil>
STEP: delete the pod
Jun 22 16:19:57.968: INFO: Waiting for pod pod-subpath-test-inlinevolume-hfh6 to disappear
Jun 22 16:19:58.018: INFO: Pod pod-subpath-test-inlinevolume-hfh6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hfh6
Jun 22 16:19:58.018: INFO: Deleting pod "pod-subpath-test-inlinevolume-hfh6" in namespace "provisioning-6526"
... skipping 50 lines ...
• [SLOW TEST:8.818 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should support remote command execution over websockets [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":72,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:19:59.658: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 77 lines ...
Jun 22 16:19:34.975: INFO: Pod "pvc-volume-tester-g6482": Phase="Running", Reason="", readiness=true. Elapsed: 12.092153452s
Jun 22 16:19:34.975: INFO: Pod "pvc-volume-tester-g6482" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 22 16:19:34.975: INFO: Deleting pod "pvc-volume-tester-g6482" in namespace "csi-mock-volumes-482"
Jun 22 16:19:35.020: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g6482" to be fully deleted
STEP: Checking CSI driver logs
Jun 22 16:19:45.171: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"125d4136-f247-11ec-80fa-0258a019f408","target_path":"/var/lib/kubelet/pods/7d08e4d1-c0d8-4c91-8d98-f4771ca600ec/volumes/kubernetes.io~csi/pvc-b21221e2-4151-4140-a2cc-514c0f919931/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-g6482
Jun 22 16:19:45.172: INFO: Deleting pod "pvc-volume-tester-g6482" in namespace "csi-mock-volumes-482"
STEP: Deleting claim pvc-5xcv5
Jun 22 16:19:45.303: INFO: Waiting up to 2m0s for PersistentVolume pvc-b21221e2-4151-4140-a2cc-514c0f919931 to get deleted
Jun 22 16:19:45.349: INFO: PersistentVolume pvc-b21221e2-4151-4140-a2cc-514c0f919931 found and phase=Released (46.507395ms)
Jun 22 16:19:47.392: INFO: PersistentVolume pvc-b21221e2-4151-4140-a2cc-514c0f919931 was removed
... skipping 44 lines ...
test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
test/e2e/storage/csi_mock_volume.go:1574
token should not be plumbed down when CSIDriver is not deployed
test/e2e/storage/csi_mock_volume.go:1602
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":10,"skipped":79,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:01.310: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-disk]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:1577
------------------------------
... skipping 49 lines ...
• [SLOW TEST:13.455 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should delete jobs and pods created by cronjob
test/e2e/apimachinery/garbage_collector.go:1145
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:01.410: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:187
... skipping 34 lines ...
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":7,"skipped":74,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:19:48.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:14.040 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:02.173: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: tmpfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 8 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 22 16:19:54.377: INFO: Waiting up to 5m0s for pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a" in namespace "emptydir-5579" to be "Succeeded or Failed"
Jun 22 16:19:54.440: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a": Phase="Pending", Reason="", readiness=false. Elapsed: 63.42238ms
Jun 22 16:19:56.488: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111397101s
Jun 22 16:19:58.490: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113087589s
Jun 22 16:20:00.490: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112871537s
Jun 22 16:20:02.489: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112186208s
Jun 22 16:20:04.490: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112642881s
STEP: Saw pod success
Jun 22 16:20:04.490: INFO: Pod "pod-8b01334d-701e-4118-8ddb-41f3e58ac65a" satisfied condition "Succeeded or Failed"
Jun 22 16:20:04.538: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-8b01334d-701e-4118-8ddb-41f3e58ac65a container test-container: <nil>
STEP: delete the pod
Jun 22 16:20:04.659: INFO: Waiting for pod pod-8b01334d-701e-4118-8ddb-41f3e58ac65a to disappear
Jun 22 16:20:04.706: INFO: Pod pod-8b01334d-701e-4118-8ddb-41f3e58ac65a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":9,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:04.880: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 182 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:50
Verify if offline PVC expansion works
test/e2e/storage/testsuites/volume_expand.go:176
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":10,"skipped":64,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:05.231: INFO: Only supported for providers [openstack] (not gce)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-92b21539-7c9a-4a19-b567-85fb2f05e92c
STEP: Creating a pod to test consume configMaps
Jun 22 16:19:52.963: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25" in namespace "configmap-9818" to be "Succeeded or Failed"
Jun 22 16:19:53.012: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Pending", Reason="", readiness=false. Elapsed: 48.694626ms
Jun 22 16:19:55.076: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112521288s
Jun 22 16:19:57.065: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101268595s
Jun 22 16:19:59.064: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100655316s
Jun 22 16:20:01.063: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099726623s
Jun 22 16:20:03.078: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11469876s
Jun 22 16:20:05.084: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.120697423s
STEP: Saw pod success
Jun 22 16:20:05.084: INFO: Pod "pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25" satisfied condition "Succeeded or Failed"
Jun 22 16:20:05.176: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:20:05.287: INFO: Waiting for pod pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25 to disappear
Jun 22 16:20:05.336: INFO: Pod pod-configmaps-1b8cab59-9b82-4066-ab4a-97353d8e5c25 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.927 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":16,"skipped":51,"failed":0}
[BeforeEach] [sig-node] Secrets
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:19:58.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: creating secret secrets-4970/secret-test-2bfbb4bb-7cb3-423f-be81-f6ba7d46566a
STEP: Creating a pod to test consume secrets
Jun 22 16:19:58.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77" in namespace "secrets-4970" to be "Succeeded or Failed"
Jun 22 16:19:58.681: INFO: Pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77": Phase="Pending", Reason="", readiness=false. Elapsed: 45.472186ms
Jun 22 16:20:00.733: INFO: Pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097188427s
Jun 22 16:20:02.727: INFO: Pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091720235s
Jun 22 16:20:04.728: INFO: Pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092750063s
Jun 22 16:20:06.727: INFO: Pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091686232s
STEP: Saw pod success
Jun 22 16:20:06.727: INFO: Pod "pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77" satisfied condition "Succeeded or Failed"
Jun 22 16:20:06.773: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77 container env-test: <nil>
STEP: delete the pod
Jun 22 16:20:06.879: INFO: Waiting for pod pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77 to disappear
Jun 22 16:20:06.927: INFO: Pod pod-configmaps-bf3927e2-5df4-4e57-87f2-b460cc5f9e77 no longer exists
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.807 seconds]
[sig-node] Secrets
test/e2e/common/node/framework.go:23
should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":51,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:07.041: INFO: Only supported for providers [vsphere] (not gce)
... skipping 60 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":51,"failed":0}
[BeforeEach] version v1
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:19:57.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
test/e2e/network/common/framework.go:23
version v1
test/e2e/network/proxy.go:74
A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:07.174: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 57 lines ...
• [SLOW TEST:15.846 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":11,"skipped":129,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:08.454: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 75 lines ...
Jun 22 16:19:59.981: INFO: Running '/logs/artifacts/e34f5ceb-f244-11ec-8dfe-daa417708791/kubectl --server=https://34.125.165.160 --kubeconfig=/root/.kube/config --namespace=kubectl-967 create -f -'
Jun 22 16:20:00.336: INFO: stderr: ""
Jun 22 16:20:00.336: INFO: stdout: "pod/httpd created\n"
Jun 22 16:20:00.336: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 22 16:20:00.336: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-967" to be "running and ready"
Jun 22 16:20:00.379: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 43.366922ms
Jun 22 16:20:00.380: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west4-a-r4pg' to be 'Running' but was 'Pending'
Jun 22 16:20:02.433: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.09721885s
Jun 22 16:20:02.433: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-r4pg' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC }]
Jun 22 16:20:04.425: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.088455081s
Jun 22 16:20:04.425: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-r4pg' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC }]
Jun 22 16:20:06.428: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.091454762s
Jun 22 16:20:06.428: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-r4pg' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC }]
Jun 22 16:20:08.424: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.08773895s
Jun 22 16:20:08.424: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-r4pg' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC }]
Jun 22 16:20:10.428: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.091443453s
Jun 22 16:20:10.428: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west4-a-r4pg' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 16:20:00 +0000 UTC }]
Jun 22 16:20:12.426: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.089497774s
Jun 22 16:20:12.426: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 22 16:20:12.426: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support port-forward
test/e2e/kubectl/kubectl.go:665
STEP: forwarding the container port to a local port
... skipping 26 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:407
should support port-forward
test/e2e/kubectl/kubectl.go:665
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":13,"skipped":76,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:13.807: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 54 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should have a working scale subresource [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-d4b4f345-b5f4-4d18-b890-2dc6f6265f6a
STEP: Creating a pod to test consume configMaps
Jun 22 16:20:02.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24" in namespace "configmap-4302" to be "Succeeded or Failed"
Jun 22 16:20:02.667: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Pending", Reason="", readiness=false. Elapsed: 45.376636ms
Jun 22 16:20:04.715: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092637959s
Jun 22 16:20:06.716: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094234016s
Jun 22 16:20:08.716: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093860557s
Jun 22 16:20:10.717: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094567499s
Jun 22 16:20:12.717: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Pending", Reason="", readiness=false. Elapsed: 10.094939499s
Jun 22 16:20:14.717: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.094821199s
STEP: Saw pod success
Jun 22 16:20:14.717: INFO: Pod "pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24" satisfied condition "Succeeded or Failed"
Jun 22 16:20:14.767: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24 container agnhost-container: <nil>
STEP: delete the pod
Jun 22 16:20:14.877: INFO: Waiting for pod pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24 to disappear
Jun 22 16:20:14.924: INFO: Pod pod-configmaps-c1a5b574-40b5-4f20-b0d2-a186b337df24 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.842 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":80,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:15.060: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 98 lines ...
• [SLOW TEST:70.915 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
optional updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:15.565: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 189 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-file]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
... skipping 31 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_downwardapi.go:108
STEP: Creating a pod to test downward API volume plugin
Jun 22 16:20:05.631: INFO: Waiting up to 5m0s for pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871" in namespace "projected-1574" to be "Succeeded or Failed"
Jun 22 16:20:05.676: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871": Phase="Pending", Reason="", readiness=false. Elapsed: 45.304326ms
Jun 22 16:20:07.726: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094955421s
Jun 22 16:20:09.721: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090694081s
Jun 22 16:20:11.721: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090550336s
Jun 22 16:20:13.719: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08862924s
Jun 22 16:20:15.720: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089522674s
STEP: Saw pod success
Jun 22 16:20:15.720: INFO: Pod "metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871" satisfied condition "Succeeded or Failed"
Jun 22 16:20:15.763: INFO: Trying to get logs from node nodes-us-west4-a-z5t6 pod metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871 container client-container: <nil>
STEP: delete the pod
Jun 22 16:20:15.873: INFO: Waiting for pod metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871 to disappear
Jun 22 16:20:15.920: INFO: Pod metadata-volume-9f722f9f-709c-45dc-a8e6-6e5e5999d871 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.763 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_downwardapi.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":73,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:16.055: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 122 lines ...
• [SLOW TEST:26.667 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
should run the lifecycle of a Deployment [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":12,"skipped":106,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:16.551: INFO: Only supported for providers [aws] (not gce)
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 22 16:20:08.881: INFO: Waiting up to 5m0s for pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7" in namespace "emptydir-8447" to be "Succeeded or Failed"
Jun 22 16:20:08.925: INFO: Pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.575359ms
Jun 22 16:20:10.970: INFO: Pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088818776s
Jun 22 16:20:12.968: INFO: Pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087245192s
Jun 22 16:20:14.969: INFO: Pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08770756s
Jun 22 16:20:16.974: INFO: Pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093185185s
STEP: Saw pod success
Jun 22 16:20:16.974: INFO: Pod "pod-25da5fa4-6081-41c9-a512-743a99f324c7" satisfied condition "Succeeded or Failed"
Jun 22 16:20:17.052: INFO: Trying to get logs from node nodes-us-west4-a-7gg3 pod pod-25da5fa4-6081-41c9-a512-743a99f324c7 container test-container: <nil>
STEP: delete the pod
Jun 22 16:20:17.170: INFO: Waiting for pod pod-25da5fa4-6081-41c9-a512-743a99f324c7 to disappear
Jun 22 16:20:17.213: INFO: Pod pod-25da5fa4-6081-41c9-a512-743a99f324c7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.806 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":143,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 51 lines ...
test/e2e/kubectl/portforward.go:476
that expects a client request
test/e2e/kubectl/portforward.go:477
should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:481
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":11,"skipped":85,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 23 lines ...
Jun 22 16:19:51.888: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:51.936: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:51.988: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:52.040: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:52.088: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:52.134: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:52.135: INFO: Lookups using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local]
Jun 22 16:19:57.181: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.228: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.274: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.318: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.408: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.453: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.500: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:19:57.500: INFO: Lookups using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local]
Jun 22 16:20:02.191: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.237: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.287: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.334: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.381: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.432: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.479: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.524: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:02.524: INFO: Lookups using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local]
Jun 22 16:20:07.187: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.232: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.278: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.324: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.372: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.421: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.471: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.520: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:07.520: INFO: Lookups using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local]
Jun 22 16:20:12.189: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.254: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.304: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.355: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.401: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.447: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.494: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.540: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:12.540: INFO: Lookups using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local]
Jun 22 16:20:17.182: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.226: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.285: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.344: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.390: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.444: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.490: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.535: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local from pod dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275: the server could not find the requested resource (get pods dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275)
Jun 22 16:20:17.535: INFO: Lookups using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5913.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5913.svc.cluster.local jessie_udp@dns-test-service-2.dns-5913.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5913.svc.cluster.local]
Jun 22 16:20:22.539: INFO: DNS probes using dns-5913/dns-test-e0af7788-25e2-4b9c-90c0-3cd0c33e1275 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:37.579 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:22.790: INFO: Driver local doesn't support ext3 -- skipping
... skipping 119 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should not deadlock when a pod's predecessor fails
test/e2e/apps/statefulset.go:256
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":12,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:24.831: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/framework/framework.go:187
... skipping 218 lines ...
test/e2e/node/framework.go:23
Pod Container Status
test/e2e/node/pods.go:202
should never report success for a pending container
test/e2e/node/pods.go:208
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":10,"skipped":79,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 76 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":60,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 22 16:20:05.453: INFO: >>> kubeConfig: /root/.kube/config
... skipping 25 lines ...
Jun 22 16:20:20.737: INFO: PersistentVolumeClaim pvc-wkhrt found but phase is Pending instead of Bound.
Jun 22 16:20:22.789: INFO: PersistentVolumeClaim pvc-wkhrt found and phase=Bound (12.318147299s)
Jun 22 16:20:22.789: INFO: Waiting up to 3m0s for PersistentVolume local-kmrq7 to have phase Bound
Jun 22 16:20:22.835: INFO: PersistentVolume local-kmrq7 found and phase=Bound (46.732891ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j47n
STEP: Creating a pod to test subpath
Jun 22 16:20:22.981: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j47n" in namespace "provisioning-6476" to be "Succeeded or Failed"
Jun 22 16:20:23.027: INFO: Pod "pod-subpath-test-preprovisionedpv-j47n": Phase="Pending", Reason="", readiness=false. Elapsed: 45.552015ms
Jun 22 16:20:25.073: INFO: Pod "pod-subpath-test-preprovisionedpv-j47n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091733839s
Jun 22 16:20:27.074: INFO: Pod "pod-subpath-test-preprovisionedpv-j47n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092051171s
Jun 22 16:20:29.072: INFO: Pod "pod-subpath-test-preprovisionedpv-j47n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09050284s
Jun 22 16:20:31.073: INFO: Pod "pod-subpath-test-preprovisionedpv-j47n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091810084s
STEP: Saw pod success
Jun 22 16:20:31.073: INFO: Pod "pod-subpath-test-preprovisionedpv-j47n" satisfied condition "Succeeded or Failed"
Jun 22 16:20:31.116: INFO: Trying to get logs from node nodes-us-west4-a-m34f pod pod-subpath-test-preprovisionedpv-j47n container test-container-volume-preprovisionedpv-j47n: <nil>
STEP: delete the pod
Jun 22 16:20:31.230: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j47n to disappear
Jun 22 16:20:31.282: INFO: Pod pod-subpath-test-preprovisionedpv-j47n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j47n
Jun 22 16:20:31.282: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j47n" in namespace "provisioning-6476"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":12,"skipped":62,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:31.971: INFO: Driver local doesn't support ext3 -- skipping
... skipping 181 lines ...
• [SLOW TEST:25.331 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
test/e2e/network/service.go:933
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":8,"skipped":56,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:32.586: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should store data
test/e2e/storage/testsuites/volumes.go:161
Jun 22 16:19:40.302: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 22 16:19:40.414: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1538" in namespace "volume-1538" to be "Succeeded or Failed"
Jun 22 16:19:40.463: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 48.577471ms
Jun 22 16:19:42.512: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098310106s
Jun 22 16:19:44.513: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098470941s
Jun 22 16:19:46.512: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097808167s
Jun 22 16:19:48.512: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097557283s
Jun 22 16:19:50.514: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099792476s
STEP: Saw pod success
Jun 22 16:19:50.514: INFO: Pod "hostpath-symlink-prep-volume-1538" satisfied condition "Succeeded or Failed"
Jun 22 16:19:50.514: INFO: Deleting pod "hostpath-symlink-prep-volume-1538" in namespace "volume-1538"
Jun 22 16:19:50.575: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1538" to be fully deleted
Jun 22 16:19:50.623: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
Jun 22 16:19:50.677: INFO: Waiting up to 5m0s for pod "hostpathsymlink-injector" in namespace "volume-1538" to be "running"
Jun 22 16:19:50.725: INFO: Pod "hostpathsymlink-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 48.157611ms
... skipping 75 lines ...
Jun 22 16:20:23.499: INFO: Pod hostpathsymlink-client still exists
Jun 22 16:20:25.452: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 22 16:20:25.501: INFO: Pod hostpathsymlink-client still exists
Jun 22 16:20:27.452: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 22 16:20:27.499: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jun 22 16:20:27.551: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1538" in namespace "volume-1538" to be "Succeeded or Failed"
Jun 22 16:20:27.599: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 48.614093ms
Jun 22 16:20:29.683: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132552715s
Jun 22 16:20:31.647: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096104187s
Jun 22 16:20:33.648: INFO: Pod "hostpath-symlink-prep-volume-1538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096892362s
STEP: Saw pod success
Jun 22 16:20:33.648: INFO: Pod "hostpath-symlink-prep-volume-1538" satisfied condition "Succeeded or Failed"
Jun 22 16:20:33.648: INFO: Deleting pod "hostpath-symlink-prep-volume-1538" in namespace "volume-1538"
Jun 22 16:20:33.702: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1538" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:187
Jun 22 16:20:33.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1538" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":11,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 22 16:20:33.872: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 40454 lines ...
numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=57\nI0622 16:24:39.218336 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.826486ms\"\nI0622 16:24:41.541157 10 service.go:322] \"Service updated ports\" service=\"endpointslicemirroring-598/example-custom-endpoints\" portCount=0\nI0622 16:24:41.541210 10 service.go:462] \"Removing service port\" portName=\"endpointslicemirroring-598/example-custom-endpoints:example\"\nI0622 16:24:41.541317 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:41.581303 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=57\nI0622 16:24:41.588069 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.859126ms\"\nI0622 16:24:42.225920 10 service.go:322] \"Service updated ports\" service=\"webhook-9595/e2e-test-webhook\" portCount=0\nI0622 16:24:42.226084 10 service.go:462] \"Removing service port\" portName=\"webhook-9595/e2e-test-webhook\"\nI0622 16:24:42.226211 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:42.268551 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=54\nI0622 16:24:42.273969 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.003262ms\"\nI0622 16:24:42.933663 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:42.970265 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=56\nI0622 16:24:42.975781 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.296834ms\"\nI0622 16:24:43.976119 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:44.011980 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:24:44.017412 10 proxier.go:820] \"SyncProxyRules 
complete\" elapsed=\"41.464606ms\"\nI0622 16:24:44.831036 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:44.866107 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55\nI0622 16:24:44.871559 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.695236ms\"\nI0622 16:24:45.871921 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:45.919082 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:24:45.926398 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.647755ms\"\nI0622 16:24:48.614159 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:48.648564 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55\nI0622 16:24:48.653898 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.875067ms\"\nI0622 16:24:48.664754 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:48.711657 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:24:48.717458 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.962211ms\"\nI0622 16:24:49.717725 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:24:49.754066 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52\nI0622 16:24:49.759499 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.914423ms\"\nI0622 16:25:00.510039 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:00.550303 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=56\nI0622 16:25:00.556052 10 
proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.307316ms\"\nI0622 16:25:00.607654 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:00.654764 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:25:00.663593 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.088157ms\"\nI0622 16:25:01.663925 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:01.700667 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52\nI0622 16:25:01.706157 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.391848ms\"\nI0622 16:25:03.004629 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:03.040224 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55\nI0622 16:25:03.046215 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.727491ms\"\nI0622 16:25:04.047162 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:04.111345 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:25:04.117973 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.042503ms\"\nI0622 16:25:04.758081 10 service.go:322] \"Service updated ports\" service=\"services-656/nodeport-collision-1\" portCount=1\nI0622 16:25:04.758142 10 service.go:437] \"Adding new service port\" portName=\"services-656/nodeport-collision-1\" servicePort=\"100.68.212.255:80/TCP\"\nI0622 16:25:04.758247 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:04.795798 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=52\nI0622 16:25:04.801584 10 proxier.go:820] 
\"SyncProxyRules complete\" elapsed=\"43.448799ms\"\nI0622 16:25:04.882617 10 service.go:322] \"Service updated ports\" service=\"services-656/nodeport-collision-1\" portCount=0\nI0622 16:25:04.958376 10 service.go:322] \"Service updated ports\" service=\"services-656/nodeport-collision-2\" portCount=1\nI0622 16:25:05.801813 10 service.go:462] \"Removing service port\" portName=\"services-656/nodeport-collision-1\"\nI0622 16:25:05.801969 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:05.838276 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52\nI0622 16:25:05.844110 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.338849ms\"\nI0622 16:25:06.514222 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:06.562434 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=56\nI0622 16:25:06.569376 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.288447ms\"\nI0622 16:25:07.570142 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:07.606387 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:25:07.612507 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.565497ms\"\nI0622 16:25:18.814579 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:18.877390 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55\nI0622 16:25:18.886560 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"72.153343ms\"\nI0622 16:25:18.904266 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:18.964540 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 
numNATRules=53\nI0622 16:25:18.985022 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"81.03025ms\"\nI0622 16:25:19.986119 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:20.022381 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52\nI0622 16:25:20.027843 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.894681ms\"\nI0622 16:25:51.919383 10 service.go:322] \"Service updated ports\" service=\"webhook-4728/e2e-test-webhook\" portCount=1\nI0622 16:25:51.919470 10 service.go:437] \"Adding new service port\" portName=\"webhook-4728/e2e-test-webhook\" servicePort=\"100.65.196.214:8443/TCP\"\nI0622 16:25:51.919687 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:51.953826 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=52\nI0622 16:25:51.959464 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.000295ms\"\nI0622 16:25:51.959730 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:51.996208 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=57\nI0622 16:25:52.001515 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.010357ms\"\nI0622 16:25:55.715120 10 service.go:322] \"Service updated ports\" service=\"webhook-4728/e2e-test-webhook\" portCount=0\nI0622 16:25:55.715170 10 service.go:462] \"Removing service port\" portName=\"webhook-4728/e2e-test-webhook\"\nI0622 16:25:55.715277 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:55.751397 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=54\nI0622 16:25:55.757434 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.262833ms\"\nI0622 16:25:55.757682 10 
proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:55.813112 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52\nI0622 16:25:55.828141 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.666034ms\"\nI0622 16:25:56.035411 10 service.go:322] \"Service updated ports\" service=\"conntrack-6783/svc-udp\" portCount=1\nI0622 16:25:56.829319 10 service.go:437] \"Adding new service port\" portName=\"conntrack-6783/svc-udp:udp\" servicePort=\"100.69.52.85:80/UDP\"\nI0622 16:25:56.829459 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:56.882111 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=52\nI0622 16:25:56.889563 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.304469ms\"\nI0622 16:25:59.177196 10 service.go:322] \"Service updated ports\" service=\"services-9488/svc-tolerate-unready\" portCount=1\nI0622 16:25:59.177256 10 service.go:437] \"Adding new service port\" portName=\"services-9488/svc-tolerate-unready:http\" servicePort=\"100.65.24.232:80/TCP\"\nI0622 16:25:59.177367 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:59.227688 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=21 numNATRules=52\nI0622 16:25:59.240054 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.790632ms\"\nI0622 16:25:59.240186 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:25:59.282714 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=21 numNATRules=52\nI0622 16:25:59.288784 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.690544ms\"\nI0622 16:26:02.017077 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:02.056190 10 proxier.go:1461] \"Reloading 
service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60\nI0622 16:26:02.063590 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.642243ms\"\nI0622 16:26:08.133012 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:08.173921 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60\nI0622 16:26:08.181302 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.393806ms\"\nI0622 16:26:08.181344 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:08.218565 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0622 16:26:08.221212 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.866723ms\"\nI0622 16:26:08.922839 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-6783/svc-udp:udp\" clusterIP=\"100.69.52.85\"\nI0622 16:26:08.922867 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:08.960909 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=65\nI0622 16:26:08.971777 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.097652ms\"\nI0622 16:26:09.362452 10 service.go:322] \"Service updated ports\" service=\"services-8087/nodeport-update-service\" portCount=1\nI0622 16:26:09.362508 10 service.go:437] \"Adding new service port\" portName=\"services-8087/nodeport-update-service\" servicePort=\"100.65.188.19:80/TCP\"\nI0622 16:26:09.362631 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:09.399841 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=65\nI0622 16:26:09.405724 10 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"43.208368ms\"\nI0622 16:26:09.454366 10 service.go:322] \"Service updated ports\" service=\"services-8087/nodeport-update-service\" portCount=1\nI0622 16:26:10.405876 10 service.go:437] \"Adding new service port\" portName=\"services-8087/nodeport-update-service:tcp-port\" servicePort=\"100.65.188.19:80/TCP\"\nI0622 16:26:10.405907 10 service.go:462] \"Removing service port\" portName=\"services-8087/nodeport-update-service\"\nI0622 16:26:10.406036 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:10.454425 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=65\nI0622 16:26:10.498018 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"92.188357ms\"\nI0622 16:26:11.745616 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:11.803183 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=65\nI0622 16:26:11.810992 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.614367ms\"\nI0622 16:26:20.955598 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:20.994086 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=73\nI0622 16:26:21.000475 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.013119ms\"\nI0622 16:26:23.342861 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:23.385229 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=76\nI0622 16:26:23.391322 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.599953ms\"\nI0622 16:26:25.454707 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:25.513997 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 
numFilterRules=4 numNATChains=30 numNATRules=74\nI0622 16:26:25.523606 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"69.046894ms\"\nI0622 16:26:25.523926 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:25.588804 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=71\nI0622 16:26:25.597847 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"74.197534ms\"\nI0622 16:26:26.536185 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:26.576538 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=28 numNATRules=62\nI0622 16:26:26.582520 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.480117ms\"\nI0622 16:26:28.103483 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:28.141306 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=61\nI0622 16:26:28.147234 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.876099ms\"\nI0622 16:26:28.970694 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:29.022162 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=59\nI0622 16:26:29.034387 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"63.831267ms\"\nI0622 16:26:30.035694 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:30.097991 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=24 numNATRules=58\nI0622 16:26:30.106332 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.821562ms\"\nI0622 16:26:43.210592 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:43.252030 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 
numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=24 numNATRules=58\nI0622 16:26:43.258012 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.53699ms\"\nI0622 16:26:44.522884 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:44.574882 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=8 numNATChains=24 numNATRules=55\nI0622 16:26:44.583519 10 service.go:322] \"Service updated ports\" service=\"conntrack-6783/svc-udp\" portCount=0\nI0622 16:26:44.590405 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"67.662723ms\"\nI0622 16:26:44.590460 10 service.go:462] \"Removing service port\" portName=\"conntrack-6783/svc-udp:udp\"\nI0622 16:26:44.590579 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:44.643499 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=53\nI0622 16:26:44.655668 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.209934ms\"\nI0622 16:26:45.656198 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:45.696149 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=53\nI0622 16:26:45.703505 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.41224ms\"\nI0622 16:26:47.934192 10 service.go:322] \"Service updated ports\" service=\"services-8087/nodeport-update-service\" portCount=2\nI0622 16:26:47.934250 10 service.go:439] \"Updating existing service port\" portName=\"services-8087/nodeport-update-service:tcp-port\" servicePort=\"100.65.188.19:80/TCP\"\nI0622 16:26:47.934268 10 service.go:437] \"Adding new service port\" portName=\"services-8087/nodeport-update-service:udp-port\" servicePort=\"100.65.188.19:80/UDP\"\nI0622 16:26:47.934381 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:47.971305 10 proxier.go:1461] 
\"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=9 numNATChains=22 numNATRules=53\nI0622 16:26:47.977007 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.766398ms\"\nI0622 16:26:47.977404 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"services-8087/nodeport-update-service:udp-port\" clusterIP=\"100.65.188.19\"\nI0622 16:26:47.977492 10 proxier.go:847] \"Stale service\" protocol=\"udp\" servicePortName=\"services-8087/nodeport-update-service:udp-port\" nodePort=32295\nI0622 16:26:47.977504 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:48.013493 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=64\nI0622 16:26:48.027409 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.354781ms\"\nI0622 16:26:49.520461 10 service.go:322] \"Service updated ports\" service=\"services-1925/service-proxy-toggled\" portCount=1\nI0622 16:26:49.520516 10 service.go:437] \"Adding new service port\" portName=\"services-1925/service-proxy-toggled\" servicePort=\"100.71.87.162:80/TCP\"\nI0622 16:26:49.520995 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:49.573610 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64\nI0622 16:26:49.580143 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.633355ms\"\nI0622 16:26:50.114155 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:50.153973 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64\nI0622 16:26:50.160683 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.639228ms\"\nI0622 16:26:50.160725 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:50.190212 10 proxier.go:1461] \"Reloading service iptables 
data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0622 16:26:50.192711 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.982728ms\"\nI0622 16:26:50.581291 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:50.619066 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64\nI0622 16:26:50.625893 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.682462ms\"\nI0622 16:26:51.156818 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:51.204493 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=28 numNATRules=69\nI0622 16:26:51.212721 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.018984ms\"\nI0622 16:26:52.213692 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:52.251430 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=7 numNATChains=29 numNATRules=72\nI0622 16:26:52.257685 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.153947ms\"\nI0622 16:26:54.881899 10 service.go:322] \"Service updated ports\" service=\"webhook-9825/e2e-test-webhook\" portCount=1\nI0622 16:26:54.881963 10 service.go:437] \"Adding new service port\" portName=\"webhook-9825/e2e-test-webhook\" servicePort=\"100.70.230.1:8443/TCP\"\nI0622 16:26:54.882078 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:54.937100 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=8 numNATChains=29 numNATRules=72\nI0622 16:26:54.947226 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.266321ms\"\nI0622 16:26:54.947391 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:54.996183 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=15 
numFilterChains=4 numFilterRules=7 numNATChains=31 numNATRules=77\nI0622 16:26:55.002421 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.156414ms\"\nI0622 16:26:56.002899 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:56.041187 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=7 numNATChains=32 numNATRules=80\nI0622 16:26:56.047591 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.852464ms\"\nI0622 16:26:56.632882 10 service.go:322] \"Service updated ports\" service=\"webhook-9825/e2e-test-webhook\" portCount=0\nI0622 16:26:57.048168 10 service.go:462] \"Removing service port\" portName=\"webhook-9825/e2e-test-webhook\"\nI0622 16:26:57.048329 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:26:57.087255 10 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=32 numNATRules=77\nI0622 16:26:57.093891 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.743445ms\"\nI0622 16:27:00.270432 10 service.go:322] \"Service updated ports\" service=\"deployment-8835/test-rolling-update-with-lb\" portCount=0\nI0622 16:27:00.270479 10 service.go:462] \"Removing service port\" portName=\"deployment-8835/test-rolling-update-with-lb\"\nI0622 16:27:00.270584 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:00.307355 10 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=3 numNATChains=30 numNATRules=75\nI0622 16:27:00.313127 10 service_health.go:107] \"Closing healthcheck\" service=\"deployment-8835/test-rolling-update-with-lb\" port=31061\nE0622 16:27:00.313613 10 service_health.go:187] \"Healthcheck closed\" err=\"accept tcp [::]:31061: use of closed network connection\" service=\"deployment-8835/test-rolling-update-with-lb\"\nI0622 16:27:00.313652 10 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"43.174615ms\"\nI0622 16:27:20.669540 10 service.go:322] \"Service updated ports\" service=\"services-1925/service-proxy-toggled\" portCount=0\nI0622 16:27:20.669591 10 service.go:462] \"Removing service port\" portName=\"services-1925/service-proxy-toggled\"\nI0622 16:27:20.669821 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:20.708488 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=30 numNATRules=68\nI0622 16:27:20.714900 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.30766ms\"\nI0622 16:27:20.715089 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:20.749422 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=64\nI0622 16:27:20.755493 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.555512ms\"\nI0622 16:27:22.361540 10 service.go:322] \"Service updated ports\" service=\"services-9488/svc-tolerate-unready\" portCount=0\nI0622 16:27:22.361710 10 service.go:462] \"Removing service port\" portName=\"services-9488/svc-tolerate-unready:http\"\nI0622 16:27:22.361846 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:22.399023 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=59\nI0622 16:27:22.404496 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.789345ms\"\nI0622 16:27:23.057100 10 service.go:322] \"Service updated ports\" service=\"services-8087/nodeport-update-service\" portCount=0\nI0622 16:27:23.057145 10 service.go:462] \"Removing service port\" portName=\"services-8087/nodeport-update-service:tcp-port\"\nI0622 16:27:23.057158 10 service.go:462] \"Removing service port\" portName=\"services-8087/nodeport-update-service:udp-port\"\nI0622 16:27:23.057346 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 
16:27:23.116456 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=42\nI0622 16:27:23.128795 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.646568ms\"\nI0622 16:27:24.129664 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:24.172805 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:27:24.178609 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.1305ms\"\nI0622 16:27:25.105920 10 service.go:322] \"Service updated ports\" service=\"webhook-3523/e2e-test-webhook\" portCount=1\nI0622 16:27:25.105992 10 service.go:437] \"Adding new service port\" portName=\"webhook-3523/e2e-test-webhook\" servicePort=\"100.70.31.14:8443/TCP\"\nI0622 16:27:25.106094 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:25.141790 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:27:25.146875 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.904002ms\"\nI0622 16:27:25.518218 10 service.go:322] \"Service updated ports\" service=\"services-1925/service-proxy-toggled\" portCount=1\nI0622 16:27:26.147830 10 service.go:437] \"Adding new service port\" portName=\"services-1925/service-proxy-toggled\" servicePort=\"100.71.87.162:80/TCP\"\nI0622 16:27:26.148012 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:26.183954 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 16:27:26.189715 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.934298ms\"\nI0622 16:27:26.549099 10 service.go:322] \"Service updated ports\" service=\"webhook-3523/e2e-test-webhook\" portCount=0\nI0622 16:27:27.189907 10 service.go:462] 
\"Removing service port\" portName=\"webhook-3523/e2e-test-webhook\"\nI0622 16:27:27.190080 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:27.224227 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=47\nI0622 16:27:27.229196 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.315352ms\"\nI0622 16:27:42.665730 10 service.go:322] \"Service updated ports\" service=\"sctp-313/sctp-endpoint-test\" portCount=1\nI0622 16:27:42.665789 10 service.go:437] \"Adding new service port\" portName=\"sctp-313/sctp-endpoint-test\" servicePort=\"100.65.149.156:5060/SCTP\"\nI0622 16:27:42.665898 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:42.702570 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 16:27:42.713242 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.45319ms\"\nI0622 16:27:42.713389 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:42.748988 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 16:27:42.754716 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.430773ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-west4-a-m34f ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-west4-a-r4pg ====\n2022/06/22 16:11:43 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): 
/usr/local/bin/kube-proxy,--cluster-cidr=100.96.0.0/11,--conntrack-max-per-core=131072,--hostname-override=nodes-us-west4-a-r4pg,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-e2e-kops-gce-stable.k8s.local,--oom-score-adj=-998,--v=2\n2022/06/22 16:11:43 Now listening for interrupts\nI0622 16:11:43.070553 11 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0622 16:11:43.070623 11 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0622 16:11:43.070629 11 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0622 16:11:43.070642 11 flags.go:64] FLAG: --bind-address-hard-fail=\"false\"\nI0622 16:11:43.070648 11 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0622 16:11:43.070654 11 flags.go:64] FLAG: --cleanup=\"false\"\nI0622 16:11:43.070660 11 flags.go:64] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0622 16:11:43.070666 11 flags.go:64] FLAG: --config=\"\"\nI0622 16:11:43.070671 11 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0622 16:11:43.070677 11 flags.go:64] FLAG: --conntrack-max-per-core=\"131072\"\nI0622 16:11:43.070688 11 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0622 16:11:43.070694 11 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0622 16:11:43.070699 11 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0622 16:11:43.070704 11 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0622 16:11:43.070710 11 flags.go:64] FLAG: --feature-gates=\"\"\nI0622 16:11:43.070717 11 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0622 16:11:43.070728 11 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0622 16:11:43.070736 11 flags.go:64] FLAG: --help=\"false\"\nI0622 16:11:43.070742 11 flags.go:64] FLAG: --hostname-override=\"nodes-us-west4-a-r4pg\"\nI0622 16:11:43.070749 11 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0622 16:11:43.070755 11 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0622 16:11:43.070761 11 flags.go:64] FLAG: 
--iptables-sync-period=\"30s\"\nI0622 16:11:43.070768 11 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0622 16:11:43.070796 11 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0622 16:11:43.070802 11 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0622 16:11:43.070807 11 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0622 16:11:43.070813 11 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0622 16:11:43.070819 11 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0622 16:11:43.070825 11 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0622 16:11:43.070830 11 flags.go:64] FLAG: --ipvs-udp-timeout=\"0s\"\nI0622 16:11:43.070840 11 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0622 16:11:43.070846 11 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0622 16:11:43.070853 11 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0622 16:11:43.070862 11 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0622 16:11:43.070868 11 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0622 16:11:43.070878 11 flags.go:64] FLAG: --log-dir=\"\"\nI0622 16:11:43.070885 11 flags.go:64] FLAG: --log-file=\"\"\nI0622 16:11:43.070896 11 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0622 16:11:43.070903 11 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0622 16:11:43.070918 11 flags.go:64] FLAG: --logtostderr=\"true\"\nI0622 16:11:43.070928 11 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0622 16:11:43.070942 11 flags.go:64] FLAG: --masquerade-all=\"false\"\nI0622 16:11:43.070949 11 flags.go:64] FLAG: --master=\"https://api.internal.e2e-e2e-kops-gce-stable.k8s.local\"\nI0622 16:11:43.070956 11 flags.go:64] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0622 16:11:43.070967 11 flags.go:64] FLAG: --metrics-port=\"10249\"\nI0622 16:11:43.070973 11 flags.go:64] FLAG: --nodeport-addresses=\"[]\"\nI0622 16:11:43.070985 11 flags.go:64] FLAG: --one-output=\"false\"\nI0622 16:11:43.070991 11 flags.go:64] FLAG: 
--oom-score-adj=\"-998\"\nI0622 16:11:43.070997 11 flags.go:64] FLAG: --pod-bridge-interface=\"\"\nI0622 16:11:43.071003 11 flags.go:64] FLAG: --pod-interface-name-prefix=\"\"\nI0622 16:11:43.071009 11 flags.go:64] FLAG: --profiling=\"false\"\nI0622 16:11:43.071019 11 flags.go:64] FLAG: --proxy-mode=\"\"\nI0622 16:11:43.071026 11 flags.go:64] FLAG: --proxy-port-range=\"\"\nI0622 16:11:43.071033 11 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0622 16:11:43.071039 11 flags.go:64] FLAG: --skip-headers=\"false\"\nI0622 16:11:43.071044 11 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0622 16:11:43.071050 11 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0622 16:11:43.071056 11 flags.go:64] FLAG: --udp-timeout=\"250ms\"\nI0622 16:11:43.071067 11 flags.go:64] FLAG: --v=\"2\"\nI0622 16:11:43.071073 11 flags.go:64] FLAG: --version=\"false\"\nI0622 16:11:43.071083 11 flags.go:64] FLAG: --vmodule=\"\"\nI0622 16:11:43.071091 11 flags.go:64] FLAG: --write-config-to=\"\"\nI0622 16:11:43.071110 11 server.go:231] \"Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP\"\nI0622 16:11:43.071321 11 feature_gate.go:245] feature gates: &{map[]}\nI0622 16:11:43.072000 11 feature_gate.go:245] feature gates: &{map[]}\nI0622 16:11:43.146250 11 node.go:163] Successfully retrieved node IP: 10.0.16.2\nI0622 16:11:43.146310 11 server_others.go:138] \"Detected node IP\" address=\"10.0.16.2\"\nI0622 16:11:43.146351 11 server_others.go:578] \"Unknown proxy mode, assuming iptables proxy\" proxyMode=\"\"\nI0622 16:11:43.146477 11 server_others.go:175] \"DetectLocalMode\" LocalMode=\"ClusterCIDR\"\nI0622 16:11:43.189719 11 server_others.go:206] \"Using iptables Proxier\"\nI0622 16:11:43.189761 11 server_others.go:213] \"kube-proxy running in dual-stack mode\" ipFamily=IPv4\nI0622 16:11:43.189776 11 server_others.go:214] \"Creating dualStackProxier for iptables\"\nI0622 16:11:43.189898 11 server_others.go:501] 
\"Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\"\nI0622 16:11:43.189934 11 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0622 16:11:43.190038 11 utils.go:431] \"Changed sysctl\" name=\"net/ipv4/conf/all/route_localnet\" before=0 after=1\nI0622 16:11:43.190102 11 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0622 16:11:43.190157 11 proxier.go:319] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0622 16:11:43.190208 11 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0622 16:11:43.190222 11 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0622 16:11:43.190284 11 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0622 16:11:43.190322 11 proxier.go:319] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0622 16:11:43.190354 11 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0622 16:11:43.190514 11 server.go:661] \"Version info\" version=\"v1.25.0-alpha.1\"\nI0622 16:11:43.190528 11 server.go:663] \"Golang settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0622 16:11:43.191867 11 conntrack.go:52] \"Setting nf_conntrack_max\" nf_conntrack_max=262144\nI0622 16:11:43.191968 11 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_tcp_timeout_close_wait\" value=3600\nI0622 16:11:43.192508 11 config.go:317] \"Starting service config controller\"\nI0622 16:11:43.192537 11 shared_informer.go:255] Waiting for caches to sync for service config\nI0622 16:11:43.192566 11 config.go:226] \"Starting endpoint slice config controller\"\nI0622 16:11:43.192700 11 
shared_informer.go:255] Waiting for caches to sync for endpoint slice config\nI0622 16:11:43.193307 11 config.go:444] \"Starting node config controller\"\nI0622 16:11:43.193328 11 shared_informer.go:255] Waiting for caches to sync for node config\nI0622 16:11:43.198598 11 service.go:322] \"Service updated ports\" service=\"kube-system/kube-dns\" portCount=3\nI0622 16:11:43.198639 11 service.go:322] \"Service updated ports\" service=\"default/kubernetes\" portCount=1\nI0622 16:11:43.199029 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:43.199041 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:43.293231 11 shared_informer.go:262] Caches are synced for service config\nI0622 16:11:43.293231 11 shared_informer.go:262] Caches are synced for endpoint slice config\nI0622 16:11:43.293444 11 shared_informer.go:262] Caches are synced for node config\nI0622 16:11:43.293414 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:43.293499 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:43.293689 11 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns\" servicePort=\"100.64.0.10:53/UDP\"\nI0622 16:11:43.293740 11 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns-tcp\" servicePort=\"100.64.0.10:53/TCP\"\nI0622 16:11:43.293762 11 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:metrics\" servicePort=\"100.64.0.10:9153/TCP\"\nI0622 16:11:43.293776 11 service.go:437] \"Adding new service port\" portName=\"default/kubernetes:https\" servicePort=\"100.64.0.1:443/TCP\"\nI0622 16:11:43.293820 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:11:43.344635 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 
numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 16:11:43.369105 11 proxier.go:820] "SyncProxyRules complete" elapsed="75.567802ms"
I0622 16:11:43.369143 11 proxier.go:853] "Syncing iptables rules"
I0622 16:11:43.414048 11 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 16:11:43.416347 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.202509ms"
I0622 16:11:45.847239 11 proxier.go:853] "Syncing iptables rules"
I0622 16:11:45.886995 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 16:11:45.894435 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.203665ms"
I0622 16:11:45.894560 11 proxier.go:853] "Syncing iptables rules"
I0622 16:11:45.932637 11 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 16:11:45.935113 11 proxier.go:820] "SyncProxyRules complete" elapsed="40.553794ms"
I0622 16:11:50.310269 11 proxier.go:853] "Syncing iptables rules"
I0622 16:11:50.347642 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 16:11:50.352763 11 proxier.go:820] "SyncProxyRules complete" elapsed="42.504188ms"
I0622 16:11:50.352824 11 proxier.go:853] "Syncing iptables rules"
I0622 16:11:50.386893 11 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 16:11:50.389566 11 proxier.go:820] "SyncProxyRules complete" elapsed="36.738136ms"
I0622 16:12:02.181571 11 proxier.go:853] "Syncing iptables rules"
I0622 16:12:02.215041 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 16:12:02.219497 11 proxier.go:820] "SyncProxyRules complete" elapsed="37.984978ms"
I0622 16:12:02.506884 11 proxier.go:853] "Syncing iptables rules"
I0622 16:12:02.541868 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10
I0622 16:12:02.546343 11 proxier.go:820] "SyncProxyRules complete" elapsed="39.560074ms"
I0622 16:12:03.188967 11 proxier.go:837] "Stale service" protocol="udp" servicePortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I0622 16:12:03.188996 11 proxier.go:853] "Syncing iptables rules"
I0622 16:12:03.225519 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=25
I0622 16:12:03.232617 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.802481ms"
I0622 16:12:04.233303 11 proxier.go:853] "Syncing iptables rules"
I0622 16:12:04.267089 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 16:12:04.277160 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.92825ms"
I0622 16:15:33.372171 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:33.406603 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 16:15:33.411409 11 proxier.go:820] "SyncProxyRules complete" elapsed="39.279124ms"
I0622 16:15:39.483556 11 service.go:322] "Service updated ports" service="pods-5379/fooservice" portCount=1
I0622 16:15:39.483613 11 service.go:437] "Adding new service port" portName="pods-5379/fooservice" servicePort="100.68.229.199:8765/TCP"
I0622 16:15:39.483634 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:39.525008 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 16:15:39.530275 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.668619ms"
I0622 16:15:39.530352 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:39.569023 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 16:15:39.574759 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.441904ms"
I0622 16:15:41.010589 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:41.057655 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 16:15:41.063762 11 proxier.go:820] "SyncProxyRules complete" elapsed="53.203024ms"
I0622 16:15:42.064042 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:42.103832 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 16:15:42.110647 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.697986ms"
I0622 16:15:42.721486 11 service.go:322] "Service updated ports" service="conntrack-6270/svc-udp" portCount=1
I0622 16:15:42.721543 11 service.go:437] "Adding new service port" portName="conntrack-6270/svc-udp:udp" servicePort="100.64.200.158:80/UDP"
I0622 16:15:42.721571 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:42.761462 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39
I0622 16:15:42.767619 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.079532ms"
I0622 16:15:43.768119 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:43.818439 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39
I0622 16:15:43.824228 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.176522ms"
I0622 16:15:45.606689 11 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=1
I0622 16:15:45.606749 11 service.go:437] "Adding new service port" portName="services-1794/service-headless-toggled" servicePort="100.67.119.37:80/TCP"
I0622 16:15:45.606772 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:45.664609 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:15:45.671563 11 proxier.go:820] "SyncProxyRules complete" elapsed="64.816811ms"
I0622 16:15:45.671617 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:45.734631 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:15:45.741823 11 proxier.go:820] "SyncProxyRules complete" elapsed="70.214797ms"
I0622 16:15:48.727209 11 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-6270/svc-udp:udp" clusterIP="100.64.200.158"
I0622 16:15:48.727309 11 proxier.go:847] "Stale service" protocol="udp" servicePortName="conntrack-6270/svc-udp:udp" nodePort=32018
I0622 16:15:48.727321 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:48.787418 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:15:48.805802 11 proxier.go:820] "SyncProxyRules complete" elapsed="78.691062ms"
I0622 16:15:49.117810 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:49.212718 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=44
I0622 16:15:49.219627 11 proxier.go:820] "SyncProxyRules complete" elapsed="101.883755ms"
I0622 16:15:49.321683 11 service.go:322] "Service updated ports" service="pods-5379/fooservice" portCount=0
I0622 16:15:50.135675 11 service.go:462] "Removing service port" portName="pods-5379/fooservice"
I0622 16:15:50.135751 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:50.174609 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:15:50.180693 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.022806ms"
I0622 16:15:50.180832 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:50.219122 11 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 16:15:50.220502 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:50.221913 11 proxier.go:820] "SyncProxyRules complete" elapsed="41.077043ms"
I0622 16:15:50.262375 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:15:50.268792 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.312835ms"
I0622 16:15:56.522873 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:56.565143 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53
I0622 16:15:56.570946 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.141357ms"
I0622 16:15:56.753910 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:56.815151 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=51
I0622 16:15:56.835307 11 proxier.go:820] "SyncProxyRules complete" elapsed="81.48169ms"
I0622 16:15:57.836471 11 proxier.go:853] "Syncing iptables rules"
I0622 16:15:57.872565 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:15:57.877913 11 proxier.go:820] "SyncProxyRules complete" elapsed="41.513493ms"
I0622 16:16:00.289680 11 service.go:322] "Service updated ports" service="apply-4693/test-svc" portCount=1
I0622 16:16:00.289737 11 service.go:437] "Adding new service port" portName="apply-4693/test-svc" servicePort="100.65.152.156:8080/UDP"
I0622 16:16:00.289766 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:00.334009 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50
I0622 16:16:00.341342 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.606495ms"
I0622 16:16:01.706662 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:01.745651 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:16:01.751535 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.9548ms"
I0622 16:16:03.016487 11 service.go:322] "Service updated ports" service="dns-5264/test-service-2" portCount=1
I0622 16:16:03.016823 11 service.go:437] "Adding new service port" portName="dns-5264/test-service-2:http" servicePort="100.70.81.69:80/TCP"
I0622 16:16:03.017041 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:03.056204 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:03.061841 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.297433ms"
I0622 16:16:03.061906 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:03.098397 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:03.105704 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.821251ms"
I0622 16:16:05.587339 11 service.go:322] "Service updated ports" service="apply-4693/test-svc" portCount=0
I0622 16:16:05.587420 11 service.go:462] "Removing service port" portName="apply-4693/test-svc"
I0622 16:16:05.587455 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:05.630372 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:16:05.639558 11 proxier.go:820] "SyncProxyRules complete" elapsed="52.134963ms"
I0622 16:16:12.237332 11 service.go:322] "Service updated ports" service="conntrack-6270/svc-udp" portCount=0
I0622 16:16:12.237386 11 service.go:462] "Removing service port" portName="conntrack-6270/svc-udp:udp"
I0622 16:16:12.237420 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:12.289549 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=48
I0622 16:16:12.298612 11 proxier.go:820] "SyncProxyRules complete" elapsed="61.22175ms"
I0622 16:16:12.298690 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:12.339545 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 16:16:12.350273 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.61751ms"
I0622 16:16:13.350840 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:13.392197 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:16:13.397897 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.176754ms"
I0622 16:16:22.802942 11 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=0
I0622 16:16:22.802997 11 service.go:462] "Removing service port" portName="services-1794/service-headless-toggled"
I0622 16:16:22.803028 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:22.839860 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=43
I0622 16:16:22.846312 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.312419ms"
I0622 16:16:29.703526 11 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=1
I0622 16:16:29.703579 11 service.go:437] "Adding new service port" portName="services-1794/service-headless-toggled" servicePort="100.67.119.37:80/TCP"
I0622 16:16:29.703606 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:29.743823 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:16:29.749144 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.569949ms"
I0622 16:16:40.659663 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:40.700376 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=47
I0622 16:16:40.705814 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.211521ms"
I0622 16:16:40.706096 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:40.710865 11 service.go:322] "Service updated ports" service="dns-5264/test-service-2" portCount=0
I0622 16:16:40.743371 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 16:16:40.749077 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.224528ms"
I0622 16:16:41.749325 11 service.go:462] "Removing service port" portName="dns-5264/test-service-2:http"
I0622 16:16:41.749384 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:41.802298 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45
I0622 16:16:41.807779 11 proxier.go:820] "SyncProxyRules complete" elapsed="58.483234ms"
I0622 16:16:48.158706 11 service.go:322] "Service updated ports" service="services-1076/externalname-service" portCount=1
I0622 16:16:48.158772 11 service.go:437] "Adding new service port" portName="services-1076/externalname-service:http" servicePort="100.70.187.19:80/TCP"
I0622 16:16:48.158800 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:48.194315 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 16:16:48.199621 11 proxier.go:820] "SyncProxyRules complete" elapsed="40.857331ms"
I0622 16:16:49.291681 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:49.352897 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:16:49.364427 11 proxier.go:820] "SyncProxyRules complete" elapsed="72.793527ms"
I0622 16:16:51.761909 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:51.820904 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53
I0622 16:16:51.827046 11 proxier.go:820] "SyncProxyRules complete" elapsed="65.203986ms"
I0622 16:16:56.676663 11 service.go:322] "Service updated ports" service="dns-7433/dns-test-service-3" portCount=1
I0622 16:16:56.676725 11 service.go:437] "Adding new service port" portName="dns-7433/dns-test-service-3:http" servicePort="100.68.250.227:80/TCP"
I0622 16:16:56.676758 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:56.721264 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:16:56.728084 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.361971ms"
I0622 16:16:56.812100 11 service.go:322] "Service updated ports" service="services-8585/tolerate-unready" portCount=1
I0622 16:16:56.812149 11 service.go:437] "Adding new service port" portName="services-8585/tolerate-unready:http" servicePort="100.69.135.109:80/TCP"
I0622 16:16:56.812179 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:56.863662 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:56.870612 11 proxier.go:820] "SyncProxyRules complete" elapsed="58.465981ms"
I0622 16:16:57.871725 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:57.914095 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:57.920594 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.928971ms"
I0622 16:16:58.426712 11 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=0
I0622 16:16:58.921471 11 service.go:462] "Removing service port" portName="services-1794/service-headless-toggled"
I0622 16:16:58.921584 11 proxier.go:853] "Syncing iptables rules"
I0622 16:16:58.962570 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=46
I0622 16:16:58.968608 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.165767ms"
I0622 16:17:01.545551 11 service.go:322] "Service updated ports" service="services-1069/clusterip-service" portCount=1
I0622 16:17:01.545613 11 service.go:437] "Adding new service port" portName="services-1069/clusterip-service" servicePort="100.67.88.217:80/TCP"
I0622 16:17:01.545640 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:01.599661 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 16:17:01.605855 11 proxier.go:820] "SyncProxyRules complete" elapsed="60.242492ms"
I0622 16:17:01.605938 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:01.624905 11 service.go:322] "Service updated ports" service="services-1069/externalsvc" portCount=1
I0622 16:17:01.647040 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 16:17:01.654269 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.357089ms"
I0622 16:17:02.342025 11 service.go:322] "Service updated ports" service="services-1076/externalname-service" portCount=0
I0622 16:17:02.655086 11 service.go:437] "Adding new service port" portName="services-1069/externalsvc" servicePort="100.68.82.182:80/TCP"
I0622 16:17:02.655126 11 service.go:462] "Removing service port" portName="services-1076/externalname-service:http"
I0622 16:17:02.655318 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:02.701133 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=37
I0622 16:17:02.708492 11 proxier.go:820] "SyncProxyRules complete" elapsed="53.434815ms"
I0622 16:17:03.708794 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:03.780962 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:17:03.792307 11 proxier.go:820] "SyncProxyRules complete" elapsed="83.603552ms"
I0622 16:17:05.818796 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:05.862613 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 16:17:05.868994 11 proxier.go:820] "SyncProxyRules complete" elapsed="50.238023ms"
I0622 16:17:06.818509 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:06.858375 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=47
I0622 16:17:06.863758 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.299148ms"
I0622 16:17:07.862279 11 service.go:322] "Service updated ports" service="services-1069/clusterip-service" portCount=0
I0622 16:17:07.862341 11 service.go:462] "Removing service port" portName="services-1069/clusterip-service"
I0622 16:17:07.862371 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:07.900786 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:17:07.906724 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.381303ms"
I0622 16:17:14.235914 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:14.291517 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:17:14.297600 11 proxier.go:820] "SyncProxyRules complete" elapsed="61.739367ms"
I0622 16:17:16.823919 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:16.872701 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=42
I0622 16:17:16.880452 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.57194ms"
I0622 16:17:17.220363 11 service.go:322] "Service updated ports" service="services-1178/endpoint-test2" portCount=1
I0622 16:17:17.220411 11 service.go:437] "Adding new service port" portName="services-1178/endpoint-test2" servicePort="100.70.128.117:80/TCP"
I0622 16:17:17.220440 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:17.259158 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:17:17.265258 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.846801ms"
I0622 16:17:17.571418 11 service.go:322] "Service updated ports" service="endpointslice-5318/example-int-port" portCount=1
I0622 16:17:17.630282 11 service.go:322] "Service updated ports" service="endpointslice-5318/example-named-port" portCount=1
I0622 16:17:17.684299 11 service.go:322] "Service updated ports" service="endpointslice-5318/example-no-match" portCount=1
I0622 16:17:18.265952 11 service.go:437] "Adding new service port" portName="endpointslice-5318/example-int-port:example" servicePort="100.67.57.248:80/TCP"
I0622 16:17:18.266014 11 service.go:437] "Adding new service port" portName="endpointslice-5318/example-named-port:http" servicePort="100.71.216.157:80/TCP"
I0622 16:17:18.266036 11 service.go:437] "Adding new service port" portName="endpointslice-5318/example-no-match:example-no-match" servicePort="100.66.17.13:80/TCP"
I0622 16:17:18.266083 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:18.302932 11 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=10 numFilterChains=4 numFilterRules=9 numNATChains=17 numNATRules=39
I0622 16:17:18.309101 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.206339ms"
I0622 16:17:19.128281 11 service.go:322] "Service updated ports" service="dns-7433/dns-test-service-3" portCount=0
I0622 16:17:19.128323 11 service.go:462] "Removing service port" portName="dns-7433/dns-test-service-3:http"
I0622 16:17:19.128355 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:19.175887 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=17 numNATRules=39
I0622 16:17:19.185064 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.733746ms"
I0622 16:17:20.185397 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:20.235428 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=9 numNATChains=17 numNATRules=36
I0622 16:17:20.241764 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.439587ms"
I0622 16:17:20.888684 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:20.936883 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=11 numFilterChains=4 numFilterRules=8 numNATChains=17 numNATRules=39
I0622 16:17:20.944265 11 proxier.go:820] "SyncProxyRules complete" elapsed="55.624235ms"
I0622 16:17:21.945259 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:21.992405 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=44
I0622 16:17:21.998727 11 proxier.go:820] "SyncProxyRules complete" elapsed="53.540009ms"
I0622 16:17:22.998999 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:23.044253 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=49
I0622 16:17:23.050996 11 proxier.go:820] "SyncProxyRules complete" elapsed="52.105549ms"
I0622 16:17:23.551138 11 service.go:322] "Service updated ports" service="services-8585/tolerate-unready" portCount=0
I0622 16:17:24.052045 11 service.go:462] "Removing service port" portName="services-8585/tolerate-unready:http"
I0622 16:17:24.052171 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:24.090286 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=46
I0622 16:17:24.096565 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.537752ms"
I0622 16:17:27.013885 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:27.062532 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=44
I0622 16:17:27.069283 11 proxier.go:820] "SyncProxyRules complete" elapsed="55.439574ms"
I0622 16:17:27.210441 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:27.261247 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=44
I0622 16:17:27.266671 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.299729ms"
I0622 16:17:27.328782 11 service.go:322] "Service updated ports" service="services-1069/externalsvc" portCount=0
I0622 16:17:28.266837 11 service.go:462] "Removing service port" portName="services-1069/externalsvc"
I0622 16:17:28.266978 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:28.316910 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 16:17:28.324318 11 proxier.go:820] "SyncProxyRules complete" elapsed="57.53944ms"
I0622 16:17:29.324784 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:29.364815 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52
I0622 16:17:29.370575 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.944978ms"
I0622 16:17:30.470703 11 service.go:322] "Service updated ports" service="webhook-2919/e2e-test-webhook" portCount=1
I0622 16:17:30.470760 11 service.go:437] "Adding new service port" portName="webhook-2919/e2e-test-webhook" servicePort="100.66.137.215:8443/TCP"
I0622 16:17:30.470787 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:30.510878 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=52
I0622 16:17:30.516542 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.785373ms"
I0622 16:17:31.517754 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:31.558597 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57
I0622 16:17:31.566936 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.238312ms"
I0622 16:17:35.589499 11 service.go:322] "Service updated ports" service="kubectl-9993/agnhost-primary" portCount=1
I0622 16:17:35.589555 11 service.go:437] "Adding new service port" portName="kubectl-9993/agnhost-primary" servicePort="100.66.220.73:6379/TCP"
I0622 16:17:35.589586 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:35.635713 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=57
I0622 16:17:35.642449 11 proxier.go:820] "SyncProxyRules complete" elapsed="52.895577ms"
I0622 16:17:35.642522 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:35.686611 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=57
I0622 16:17:35.693159 11 proxier.go:820] "SyncProxyRules complete" elapsed="50.656903ms"
I0622 16:17:37.004623 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:37.043201 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=60
I0622 16:17:37.049154 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.586586ms"
I0622 16:17:40.371415 11 service.go:322] "Service updated ports" service="webhook-2919/e2e-test-webhook" portCount=0
I0622 16:17:40.371458 11 service.go:462] "Removing service port" portName="webhook-2919/e2e-test-webhook"
I0622 16:17:40.371488 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:40.410361 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=57
I0622 16:17:40.416913 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.45162ms"
I0622 16:17:40.417433 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:40.477551 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=55
I0622 16:17:40.489984 11 proxier.go:820] "SyncProxyRules complete" elapsed="72.581054ms"
I0622 16:17:41.274646 11 service.go:322] "Service updated ports" service="crd-webhook-6908/e2e-test-crd-conversion-webhook" portCount=1
I0622 16:17:41.490199 11 service.go:437] "Adding new service port" portName="crd-webhook-6908/e2e-test-crd-conversion-webhook" servicePort="100.66.97.249:9443/TCP"
I0622 16:17:41.490300 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:41.530259 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=58
I0622 16:17:41.536263 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.112524ms"
I0622 16:17:43.144406 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:43.189007 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=24 numNATRules=54
I0622 16:17:43.196280 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.921999ms"
I0622 16:17:43.331706 11 service.go:322] "Service updated ports" service="services-1178/endpoint-test2" portCount=0
I0622 16:17:44.156242 11 service.go:462] "Removing service port" portName="services-1178/endpoint-test2"
I0622 16:17:44.156330 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:44.226876 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57
I0622 16:17:44.243083 11 proxier.go:820] "SyncProxyRules complete" elapsed="86.848749ms"
I0622 16:17:45.243266 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:45.308563 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57
I0622 16:17:45.319356 11 proxier.go:820] "SyncProxyRules complete" elapsed="76.144082ms"
I0622 16:17:45.947631 11 service.go:322] "Service updated ports" service="crd-webhook-6908/e2e-test-crd-conversion-webhook" portCount=0
I0622 16:17:45.947701 11 service.go:462] "Removing service port" portName="crd-webhook-6908/e2e-test-crd-conversion-webhook"
I0622 16:17:45.947738 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:45.986594 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=54
I0622 16:17:45.995603 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.919253ms"
I0622 16:17:46.995838 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:47.034680 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52
I0622 16:17:47.040812 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.052614ms"
I0622 16:17:48.152447 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:48.209299 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=50
I0622 16:17:48.216469 11 proxier.go:820] "SyncProxyRules complete" elapsed="64.096396ms"
I0622 16:17:49.179845 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:49.235531 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=50
I0622 16:17:49.250678 11 proxier.go:820] "SyncProxyRules complete" elapsed="70.861487ms"
I0622 16:17:50.252464 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:50.306579 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52
I0622 16:17:50.313831 11 proxier.go:820] "SyncProxyRules complete" elapsed="61.507435ms"
I0622 16:17:52.128164 11 service.go:322] "Service updated ports" service="kubectl-9993/agnhost-primary" portCount=0
I0622 16:17:52.128209 11 service.go:462] "Removing service port" portName="kubectl-9993/agnhost-primary"
I0622 16:17:52.128238 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:52.196249 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=49
I0622 16:17:52.203823 11 proxier.go:820] "SyncProxyRules complete" elapsed="75.607121ms"
I0622 16:17:52.203904 11 proxier.go:853] "Syncing iptables rules"
I0622 16:17:52.261110 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:17:52.277517 11 proxier.go:820] "SyncProxyRules complete" elapsed="73.648447ms"
I0622 16:18:03.516378 11 proxier.go:853] "Syncing iptables rules"
I0622 16:18:03.566931 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:18:03.578594 11 proxier.go:820] "SyncProxyRules complete" elapsed="62.315173ms"
I0622 16:18:03.579077 11 proxier.go:853] "Syncing iptables rules"
I0622 16:18:03.647361 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=39
I0622 16:18:03.655374 11 proxier.go:820] "SyncProxyRules complete" elapsed="76.712331ms"
I0622 16:18:03.774633 11 service.go:322] "Service updated ports" service="endpointslice-5318/example-int-port" portCount=0
I0622 16:18:03.789212 11 service.go:322] "Service updated ports" service="endpointslice-5318/example-named-port" portCount=0
I0622 16:18:03.813592 11 service.go:322] "Service updated ports" service="endpointslice-5318/example-no-match" portCount=0
I0622 16:18:04.655586 11 service.go:462] "Removing service port" portName="endpointslice-5318/example-int-port:example"
I0622 16:18:04.655621 11 service.go:462] "Removing service port" portName="endpointslice-5318/example-named-port:http"
I0622 16:18:04.655634 11 service.go:462] "Removing service port" portName="endpointslice-5318/example-no-match:example-no-match"
I0622 16:18:04.655659 11 proxier.go:853] "Syncing iptables rules"
I0622 16:18:04.692445 11 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 16:18:04.697640 11 proxier.go:820] "SyncProxyRules complete" elapsed="42.0848ms"
I0622 16:18:45.591533 11 service.go:322] "Service updated ports" service="webhook-7868/e2e-test-webhook" portCount=1
I0622 16:18:45.591648 11 service.go:437] "Adding new service port" portName="webhook-7868/e2e-test-webhook"
servicePort=\"100.68.35.10:8443/TCP\"\nI0622 16:18:45.591698 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:18:45.652606 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:18:45.658891 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"67.290766ms\"\nI0622 16:18:45.658983 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:18:45.753840 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:18:45.760603 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"101.665583ms\"\nI0622 16:18:59.966581 11 service.go:322] \"Service updated ports\" service=\"webhook-7868/e2e-test-webhook\" portCount=0\nI0622 16:18:59.966631 11 service.go:462] \"Removing service port\" portName=\"webhook-7868/e2e-test-webhook\"\nI0622 16:18:59.966663 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:00.066070 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 16:19:00.072250 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"105.558188ms\"\nI0622 16:19:00.072449 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:00.146106 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:19:00.152593 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"80.170531ms\"\nI0622 16:19:34.503431 11 service.go:322] \"Service updated ports\" service=\"services-1711/nodeport-reuse\" portCount=1\nI0622 16:19:34.503598 11 service.go:437] \"Adding new service port\" portName=\"services-1711/nodeport-reuse\" servicePort=\"100.71.159.219:80/TCP\"\nI0622 16:19:34.503632 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:34.542265 11 
proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:19:34.549129 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.639235ms\"\nI0622 16:19:34.549182 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:34.556488 11 service.go:322] \"Service updated ports\" service=\"services-1711/nodeport-reuse\" portCount=0\nI0622 16:19:34.587930 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:19:34.593935 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.768832ms\"\nI0622 16:19:35.594079 11 service.go:462] \"Removing service port\" portName=\"services-1711/nodeport-reuse\"\nI0622 16:19:35.594140 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:35.648492 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:19:35.654880 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.817409ms\"\nI0622 16:19:43.334454 11 service.go:322] \"Service updated ports\" service=\"services-1711/nodeport-reuse\" portCount=1\nI0622 16:19:43.334504 11 service.go:437] \"Adding new service port\" portName=\"services-1711/nodeport-reuse\" servicePort=\"100.67.196.227:80/TCP\"\nI0622 16:19:43.334530 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:43.378168 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:19:43.383287 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.785229ms\"\nI0622 16:19:43.383342 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:43.402277 11 service.go:322] \"Service updated ports\" service=\"services-1711/nodeport-reuse\" portCount=0\nI0622 16:19:43.421287 11 proxier.go:1461] \"Reloading 
service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:19:43.426421 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.097551ms\"\nI0622 16:19:44.426877 11 service.go:462] \"Removing service port\" portName=\"services-1711/nodeport-reuse\"\nI0622 16:19:44.426934 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:44.467440 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:19:44.472472 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.617128ms\"\nI0622 16:19:45.990698 11 service.go:322] \"Service updated ports\" service=\"webhook-8615/e2e-test-webhook\" portCount=1\nI0622 16:19:45.990752 11 service.go:437] \"Adding new service port\" portName=\"webhook-8615/e2e-test-webhook\" servicePort=\"100.69.164.10:8443/TCP\"\nI0622 16:19:45.990779 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:46.050461 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:19:46.057576 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"66.816857ms\"\nI0622 16:19:47.057805 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:47.116108 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:19:47.121907 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"64.178975ms\"\nI0622 16:19:48.092619 11 service.go:322] \"Service updated ports\" service=\"webhook-8615/e2e-test-webhook\" portCount=0\nI0622 16:19:48.092673 11 service.go:462] \"Removing service port\" portName=\"webhook-8615/e2e-test-webhook\"\nI0622 16:19:48.092711 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:48.140063 11 proxier.go:1461] \"Reloading service iptables data\" 
numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 16:19:48.147315 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.640411ms\"\nI0622 16:19:48.792435 11 service.go:322] \"Service updated ports\" service=\"proxy-9511/proxy-service-rd7fv\" portCount=4\nI0622 16:19:48.792498 11 service.go:437] \"Adding new service port\" portName=\"proxy-9511/proxy-service-rd7fv:portname2\" servicePort=\"100.68.222.99:81/TCP\"\nI0622 16:19:48.792513 11 service.go:437] \"Adding new service port\" portName=\"proxy-9511/proxy-service-rd7fv:tlsportname1\" servicePort=\"100.68.222.99:443/TCP\"\nI0622 16:19:48.792526 11 service.go:437] \"Adding new service port\" portName=\"proxy-9511/proxy-service-rd7fv:tlsportname2\" servicePort=\"100.68.222.99:444/TCP\"\nI0622 16:19:48.792538 11 service.go:437] \"Adding new service port\" portName=\"proxy-9511/proxy-service-rd7fv:portname1\" servicePort=\"100.68.222.99:80/TCP\"\nI0622 16:19:48.792587 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:48.841643 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 16:19:48.847772 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.276864ms\"\nI0622 16:19:49.848473 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:49.915840 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 16:19:49.921626 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"73.182786ms\"\nI0622 16:19:53.157414 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:53.198499 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=54\nI0622 16:19:53.203972 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.626639ms\"\nI0622 16:19:55.686744 
11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:55.742135 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=23 numNATRules=42\nI0622 16:19:55.755860 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"69.255757ms\"\nI0622 16:19:56.112934 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:56.163007 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 16:19:56.168401 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.458833ms\"\nI0622 16:19:57.168686 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:19:57.209132 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 16:19:57.214376 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.769685ms\"\nI0622 16:20:01.524053 11 service.go:322] \"Service updated ports\" service=\"proxy-9511/proxy-service-rd7fv\" portCount=0\nI0622 16:20:01.524229 11 service.go:462] \"Removing service port\" portName=\"proxy-9511/proxy-service-rd7fv:tlsportname1\"\nI0622 16:20:01.524264 11 service.go:462] \"Removing service port\" portName=\"proxy-9511/proxy-service-rd7fv:tlsportname2\"\nI0622 16:20:01.524275 11 service.go:462] \"Removing service port\" portName=\"proxy-9511/proxy-service-rd7fv:portname1\"\nI0622 16:20:01.524285 11 service.go:462] \"Removing service port\" portName=\"proxy-9511/proxy-service-rd7fv:portname2\"\nI0622 16:20:01.524332 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:01.560081 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:20:01.564994 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.882755ms\"\nI0622 16:20:01.565049 11 proxier.go:853] \"Syncing 
iptables rules\"\nI0622 16:20:01.603803 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:20:01.609450 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.416112ms\"\nI0622 16:20:03.671536 11 service.go:322] \"Service updated ports\" service=\"webhook-6578/e2e-test-webhook\" portCount=1\nI0622 16:20:03.671590 11 service.go:437] \"Adding new service port\" portName=\"webhook-6578/e2e-test-webhook\" servicePort=\"100.70.172.70:8443/TCP\"\nI0622 16:20:03.671615 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:03.712093 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:20:03.718624 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.033054ms\"\nI0622 16:20:03.718790 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:03.756477 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:20:03.762697 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.027517ms\"\nI0622 16:20:06.383438 11 service.go:322] \"Service updated ports\" service=\"proxy-208/test-service\" portCount=1\nI0622 16:20:06.383492 11 service.go:437] \"Adding new service port\" portName=\"proxy-208/test-service\" servicePort=\"100.67.180.37:80/TCP\"\nI0622 16:20:06.383519 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:06.422213 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0622 16:20:06.428319 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.828807ms\"\nI0622 16:20:06.428481 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:06.466003 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 
numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=44\nI0622 16:20:06.471935 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.571349ms\"\nI0622 16:20:08.271431 11 service.go:322] \"Service updated ports\" service=\"webhook-6578/e2e-test-webhook\" portCount=0\nI0622 16:20:08.271478 11 service.go:462] \"Removing service port\" portName=\"webhook-6578/e2e-test-webhook\"\nI0622 16:20:08.271508 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:08.326492 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=41\nI0622 16:20:08.332393 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.909608ms\"\nI0622 16:20:09.333503 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:09.372688 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:20:09.377969 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.645655ms\"\nI0622 16:20:12.278030 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:12.321420 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36\nI0622 16:20:12.327263 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.251517ms\"\nI0622 16:20:12.394515 11 service.go:322] \"Service updated ports\" service=\"proxy-208/test-service\" portCount=0\nI0622 16:20:12.394563 11 service.go:462] \"Removing service port\" portName=\"proxy-208/test-service\"\nI0622 16:20:12.394612 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:12.436811 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:20:12.441763 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.196277ms\"\nI0622 16:20:13.442848 
11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:13.487369 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:20:13.493089 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.309298ms\"\nI0622 16:20:18.324983 11 service.go:322] \"Service updated ports\" service=\"services-9196/sourceip-test\" portCount=1\nI0622 16:20:18.325046 11 service.go:437] \"Adding new service port\" portName=\"services-9196/sourceip-test\" servicePort=\"100.65.96.7:8080/TCP\"\nI0622 16:20:18.325073 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:18.364105 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:20:18.369285 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.242168ms\"\nI0622 16:20:18.369342 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:18.408064 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:20:18.413977 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.652476ms\"\nI0622 16:20:22.227355 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:22.280139 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:20:22.285646 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"58.336956ms\"\nI0622 16:20:23.180929 11 service.go:322] \"Service updated ports\" service=\"services-7105/nodeport-test\" portCount=1\nI0622 16:20:23.180987 11 service.go:437] \"Adding new service port\" portName=\"services-7105/nodeport-test:http\" servicePort=\"100.69.202.26:80/TCP\"\nI0622 16:20:23.181020 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:23.221450 11 proxier.go:1461] \"Reloading service 
iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:20:23.226970 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.989148ms\"\nI0622 16:20:24.227993 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:24.266848 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:20:24.273037 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.075829ms\"\nI0622 16:20:29.577141 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:29.625876 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=47\nI0622 16:20:29.634445 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.361572ms\"\nI0622 16:20:31.022965 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:31.064328 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 16:20:31.070695 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.786304ms\"\nI0622 16:20:32.240075 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:32.279553 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=47\nI0622 16:20:32.284935 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.908592ms\"\nI0622 16:20:32.316348 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:32.346992 11 service.go:322] \"Service updated ports\" service=\"services-9196/sourceip-test\" portCount=0\nI0622 16:20:32.356017 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 16:20:32.361273 11 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"44.966131ms\"\nI0622 16:20:33.362265 11 service.go:462] \"Removing service port\" portName=\"services-9196/sourceip-test\"\nI0622 16:20:33.362330 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:33.403469 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45\nI0622 16:20:33.409208 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.982616ms\"\nI0622 16:20:36.425039 11 service.go:322] \"Service updated ports\" service=\"services-5310/externalip-test\" portCount=1\nI0622 16:20:36.425110 11 service.go:437] \"Adding new service port\" portName=\"services-5310/externalip-test:http\" servicePort=\"100.64.101.155:80/TCP\"\nI0622 16:20:36.425296 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:36.469402 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=45\nI0622 16:20:36.475771 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.676855ms\"\nI0622 16:20:36.475843 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:36.514782 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=45\nI0622 16:20:36.520471 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.660357ms\"\nI0622 16:20:39.840084 11 service.go:322] \"Service updated ports\" service=\"dns-8541/test-service-2\" portCount=1\nI0622 16:20:39.840149 11 service.go:437] \"Adding new service port\" portName=\"dns-8541/test-service-2:http\" servicePort=\"100.70.203.19:80/TCP\"\nI0622 16:20:39.840180 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:39.879913 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=45\nI0622 16:20:39.885840 11 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"45.695148ms\"\nI0622 16:20:39.885899 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:39.929756 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=45\nI0622 16:20:39.936746 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.8635ms\"\nI0622 16:20:41.183732 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:41.224235 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53\nI0622 16:20:41.230097 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.410129ms\"\nI0622 16:20:45.020472 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:45.079859 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56\nI0622 16:20:45.108230 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"87.921443ms\"\nI0622 16:20:45.630424 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:45.699563 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=61\nI0622 16:20:45.706159 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"75.777427ms\"\nI0622 16:20:50.930511 11 service.go:322] \"Service updated ports\" service=\"aggregator-3267/sample-api\" portCount=1\nI0622 16:20:50.930638 11 service.go:437] \"Adding new service port\" portName=\"aggregator-3267/sample-api\" servicePort=\"100.64.173.141:7443/TCP\"\nI0622 16:20:50.930693 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:50.973981 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61\nI0622 16:20:50.980824 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.236595ms\"\nI0622 
16:20:50.980892 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:51.023679 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61\nI0622 16:20:51.030678 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.805737ms\"\nI0622 16:20:51.641392 11 service.go:322] \"Service updated ports\" service=\"webhook-2904/e2e-test-webhook\" portCount=1\nI0622 16:20:52.030876 11 service.go:437] \"Adding new service port\" portName=\"webhook-2904/e2e-test-webhook\" servicePort=\"100.64.246.253:8443/TCP\"\nI0622 16:20:52.030959 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:52.071801 11 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=27 numNATRules=66\nI0622 16:20:52.078780 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.951556ms\"\nI0622 16:20:52.616291 11 service.go:322] \"Service updated ports\" service=\"conntrack-8394/svc-udp\" portCount=1\nI0622 16:20:53.079700 11 service.go:437] \"Adding new service port\" portName=\"conntrack-8394/svc-udp:udp\" servicePort=\"100.65.216.237:80/UDP\"\nI0622 16:20:53.079757 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:53.120891 11 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=27 numNATRules=66\nI0622 16:20:53.126775 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.106337ms\"\nI0622 16:20:54.853043 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:54.901801 11 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=27 numNATRules=64\nI0622 16:20:54.908243 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.262532ms\"\nI0622 16:20:54.932429 11 service.go:322] \"Service updated ports\" service=\"services-7105/nodeport-test\" 
portCount=0\nI0622 16:20:54.932481 11 service.go:462] \"Removing service port\" portName=\"services-7105/nodeport-test:http\"\nI0622 16:20:54.932582 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:54.984923 11 proxier.go:1461] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=58\nI0622 16:20:54.997311 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"64.823688ms\"\nI0622 16:20:55.919916 11 service.go:322] \"Service updated ports\" service=\"webhook-2904/e2e-test-webhook\" portCount=0\nI0622 16:20:55.965575 11 service.go:462] \"Removing service port\" portName=\"webhook-2904/e2e-test-webhook\"\nI0622 16:20:55.965756 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:56.010323 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=52\nI0622 16:20:56.016849 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.285873ms\"\nI0622 16:20:57.017115 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:57.059855 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=50\nI0622 16:20:57.066322 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.239969ms\"\nI0622 16:20:59.889230 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:59.935477 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=48\nI0622 16:20:59.941905 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.742875ms\"\nI0622 16:20:59.942074 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:20:59.987843 11 proxier.go:1461] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=20 numNATRules=42\nI0622 16:20:59.993445 11 proxier.go:820] 
"SyncProxyRules complete" elapsed="51.46304ms"
I0622 16:21:00.096046 11 service.go:322] "Service updated ports" service="services-5310/externalip-test" portCount=0
I0622 16:21:00.993904 11 service.go:462] "Removing service port" portName="services-5310/externalip-test:http"
I0622 16:21:00.993995 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:01.035068 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39
I0622 16:21:01.040600 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.721747ms"
I0622 16:21:02.467122 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:02.514737 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39
I0622 16:21:02.521181 11 proxier.go:820] "SyncProxyRules complete" elapsed="54.138805ms"
I0622 16:21:08.188115 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:08.229542 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=44
I0622 16:21:08.237197 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.130684ms"
I0622 16:21:08.657121 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hbd52" portCount=1
I0622 16:21:08.657220 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hbd52" servicePort="100.69.100.40:80/TCP"
I0622 16:21:08.657569 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:08.699513 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 16:21:08.706624 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.403832ms"
I0622 16:21:08.809188 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-89892" portCount=1
I0622 16:21:08.830109 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2hlzp" portCount=1
I0622 16:21:08.841936 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2lcpb" portCount=1
I0622 16:21:08.862447 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-qq6rq" portCount=1
I0622 16:21:08.885147 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-khb2m" portCount=1
I0622 16:21:08.893241 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-l4cwq" portCount=1
I0622 16:21:08.919496 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pkhrb" portCount=1
I0622 16:21:08.937222 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-n2sgb" portCount=1
I0622 16:21:08.948829 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rhf9x" portCount=1
I0622 16:21:08.983332 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lcnqq" portCount=1
I0622 16:21:09.022440 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cprt2" portCount=1
I0622 16:21:09.023601 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fv46k" portCount=1
I0622 16:21:09.047905 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-xtsvp" portCount=1
I0622 16:21:09.088395 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-q8wgz" portCount=1
I0622 16:21:09.092958 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-xb26c" portCount=1
I0622 16:21:09.114617 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pb4rj" portCount=1
I0622 16:21:09.135715 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9sc5t" portCount=1
I0622 16:21:09.158185 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5lxg5" portCount=1
I0622 16:21:09.175684 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mgr9x" portCount=1
I0622 16:21:09.192918 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-vxrqz" portCount=1
I0622 16:21:09.192983 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-xb26c" servicePort="100.71.102.230:80/TCP"
I0622 16:21:09.193000 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-pb4rj" servicePort="100.65.171.171:80/TCP"
I0622 16:21:09.193012 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-qq6rq" servicePort="100.66.122.247:80/TCP"
I0622 16:21:09.193026 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-l4cwq" servicePort="100.68.23.212:80/TCP"
I0622 16:21:09.193040 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-lcnqq" servicePort="100.69.11.20:80/TCP"
I0622 16:21:09.193061 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-q8wgz" servicePort="100.69.72.140:80/TCP"
I0622 16:21:09.193087 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-9sc5t" servicePort="100.69.67.250:80/TCP"
I0622 16:21:09.193105 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-5lxg5" servicePort="100.70.122.145:80/TCP"
I0622 16:21:09.193118 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-mgr9x" servicePort="100.68.231.255:80/TCP"
I0622 16:21:09.193130 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-2lcpb" servicePort="100.71.53.27:80/TCP"
I0622 16:21:09.193144 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-khb2m" servicePort="100.71.19.158:80/TCP"
I0622 16:21:09.193157 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-n2sgb" servicePort="100.68.225.126:80/TCP"
I0622 16:21:09.193176 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-89892" servicePort="100.70.48.187:80/TCP"
I0622 16:21:09.193192 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-cprt2" servicePort="100.71.87.79:80/TCP"
I0622 16:21:09.193205 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-vxrqz" servicePort="100.68.33.136:80/TCP"
I0622 16:21:09.193218 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-fv46k" servicePort="100.65.94.17:80/TCP"
I0622 16:21:09.193233 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-xtsvp" servicePort="100.71.128.183:80/TCP"
I0622 16:21:09.193253 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-2hlzp" servicePort="100.69.58.77:80/TCP"
I0622 16:21:09.193269 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-pkhrb" servicePort="100.71.203.116:80/TCP"
I0622 16:21:09.193292 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-rhf9x" servicePort="100.69.142.100:80/TCP"
I0622 16:21:09.193559 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:09.233808 11 proxier.go:1461] "Reloading service iptables data" numServices=28 numEndpoints=29 numFilterChains=4 numFilterRules=6 numNATChains=57 numNATRules=139
I0622 16:21:09.242463 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.485853ms"
I0622 16:21:09.245523 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4gjgv" portCount=1
I0622 16:21:09.279712 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-z2l2x" portCount=1
I0622 16:21:09.289919 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8lq2l" portCount=1
I0622 16:21:09.310562 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-m9vcq" portCount=1
I0622 16:21:09.500143 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-vzzl9" portCount=1
I0622 16:21:09.505819 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-vxbkz" portCount=1
I0622 16:21:09.511311 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9zfsq" portCount=1
I0622 16:21:09.532915 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d4jt8" portCount=1
I0622 16:21:09.534627 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ljz6g" portCount=1
I0622 16:21:09.534933 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8f6gw" portCount=1
I0622 16:21:09.542888 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4zmxr" portCount=1
I0622 16:21:09.543910 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-wzmzn" portCount=1
I0622 16:21:09.547543 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8r9bd" portCount=1
I0622 16:21:09.549838 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-gbq87" portCount=1
I0622 16:21:09.552903 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-892lq" portCount=1
I0622 16:21:09.553947 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-tqpq2" portCount=1
I0622 16:21:09.554663 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-dh5fs" portCount=1
I0622 16:21:09.555863 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pmfw6" portCount=1
I0622 16:21:09.556343 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-c46b6" portCount=1
I0622 16:21:09.664081 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-qqmj6" portCount=1
I0622 16:21:09.676998 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-xsx5h" portCount=1
I0622 16:21:09.702212 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-x2lkw" portCount=1
I0622 16:21:09.707129 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-b4684" portCount=1
I0622 16:21:09.728576 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cqxj2" portCount=1
I0622 16:21:09.739718 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ftv94" portCount=1
I0622 16:21:09.773735 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mqktd" portCount=1
I0622 16:21:09.799856 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ggb4d" portCount=1
I0622 16:21:09.831071 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-vccdv" portCount=1
I0622 16:21:09.862880 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hhbx8" portCount=1
I0622 16:21:09.882066 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-tdrz2" portCount=1
I0622 16:21:09.988066 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nt9nr" portCount=1
I0622 16:21:09.989886 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-b6pq6" portCount=1
I0622 16:21:09.990859 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hcs2s" portCount=1
I0622 16:21:09.992236 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fz69j" portCount=1
I0622 16:21:10.029856 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2ptpx" portCount=1
I0622 16:21:10.034960 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lj7l9" portCount=1
I0622 16:21:10.038247 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hbnp5" portCount=1
I0622 16:21:10.042588 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-wvvll" portCount=1
I0622 16:21:10.044100 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jvtg2" portCount=1
I0622 16:21:10.047392 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-t29jr" portCount=1
I0622 16:21:10.048952 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hjcn9" portCount=1
I0622 16:21:10.049702 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fz266" portCount=1
I0622 16:21:10.050415 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-v5fq4" portCount=1
I0622 16:21:10.051350 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-x6hs5" portCount=1
I0622 16:21:10.052051 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-xhpm7" portCount=1
I0622 16:21:10.056595 11 service.go:322] "Service updated ports" service="services-1523/svc-not-tolerate-unready" portCount=1
I0622 16:21:10.093755 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kclls" portCount=1
I0622 16:21:10.112479 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-dvmph" portCount=1
I0622 16:21:10.138349 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nwsbp" portCount=1
I0622 16:21:10.145959 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6npgk" portCount=1
I0622 16:21:10.153032 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ldprv" portCount=1
I0622 16:21:10.214471 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2lfbc" portCount=1
I0622 16:21:10.214562 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-fz266" servicePort="100.67.49.20:80/TCP"
I0622 16:21:10.214581 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-x6hs5" servicePort="100.69.179.250:80/TCP"
I0622 16:21:10.214598 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-4zmxr" servicePort="100.66.124.144:80/TCP"
I0622 16:21:10.214611 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-8r9bd" servicePort="100.65.150.32:80/TCP"
I0622 16:21:10.214625 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-dh5fs" servicePort="100.64.165.109:80/TCP"
I0622 16:21:10.214639 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-lj7l9" servicePort="100.70.66.220:80/TCP"
I0622 16:21:10.214654 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-b4684" servicePort="100.69.190.149:80/TCP"
I0622 16:21:10.214669 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-ftv94" servicePort="100.70.204.31:80/TCP"
I0622 16:21:10.214685 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-t29jr" servicePort="100.71.93.72:80/TCP"
I0622 16:21:10.214740 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-z2l2x" servicePort="100.66.178.105:80/TCP"
I0622 16:21:10.214802 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-8lq2l" servicePort="100.66.247.177:80/TCP"
I0622 16:21:10.214846 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-m9vcq" servicePort="100.66.226.67:80/TCP"
I0622 16:21:10.214898 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-qqmj6" servicePort="100.67.234.206:80/TCP"
I0622 16:21:10.214924 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-6npgk" servicePort="100.68.6.50:80/TCP"
I0622 16:21:10.214939 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-wzmzn" servicePort="100.71.205.179:80/TCP"
I0622 16:21:10.214954 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-2ptpx" servicePort="100.64.90.80:80/TCP"
I0622 16:21:10.214969 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hbnp5" servicePort="100.67.175.29:80/TCP"
I0622 16:21:10.214984 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-jvtg2" servicePort="100.70.96.88:80/TCP"
I0622 16:21:10.214998 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hjcn9" servicePort="100.69.80.26:80/TCP"
I0622 16:21:10.215019 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-xhpm7" servicePort="100.64.122.4:80/TCP"
I0622 16:21:10.215043 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-dvmph" servicePort="100.68.84.195:80/TCP"
I0622 16:21:10.215059 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-nwsbp" servicePort="100.65.173.112:80/TCP"
I0622 16:21:10.215074 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-8f6gw" servicePort="100.67.242.221:80/TCP"
I0622 16:21:10.215161 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-tqpq2" servicePort="100.67.110.221:80/TCP"
I0622 16:21:10.215191 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-cqxj2" servicePort="100.71.145.28:80/TCP"
I0622 16:21:10.215217 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-tdrz2" servicePort="100.66.27.88:80/TCP"
I0622 16:21:10.215233 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-2lfbc" servicePort="100.70.223.114:80/TCP"
I0622 16:21:10.215396 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-vccdv" servicePort="100.65.173.79:80/TCP"
I0622 16:21:10.215428 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-nt9nr" servicePort="100.64.155.38:80/TCP"
I0622 16:21:10.215445 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-v5fq4" servicePort="100.68.157.40:80/TCP"
I0622 16:21:10.215472 11 service.go:437] "Adding new service port" portName="services-1523/svc-not-tolerate-unready:http" servicePort="100.69.94.95:80/TCP"
I0622 16:21:10.215495 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-4gjgv" servicePort="100.64.86.106:80/TCP"
I0622 16:21:10.215510 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-ljz6g" servicePort="100.65.155.163:80/TCP"
I0622 16:21:10.215524 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-xsx5h" servicePort="100.68.162.152:80/TCP"
I0622 16:21:10.215538 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-x2lkw" servicePort="100.69.243.67:80/TCP"
I0622 16:21:10.215555 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-kclls" servicePort="100.66.51.194:80/TCP"
I0622 16:21:10.215578 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-b6pq6" servicePort="100.70.116.180:80/TCP"
I0622 16:21:10.215599 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hcs2s" servicePort="100.69.11.208:80/TCP"
I0622 16:21:10.215613 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-d4jt8" servicePort="100.71.55.24:80/TCP"
I0622 16:21:10.215627 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-pmfw6" servicePort="100.67.165.46:80/TCP"
I0622 16:21:10.215643 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-c46b6" servicePort="100.65.67.35:80/TCP"
I0622 16:21:10.215659 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-ggb4d" servicePort="100.65.156.74:80/TCP"
I0622 16:21:10.215707 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-vxbkz" servicePort="100.70.176.249:80/TCP"
I0622 16:21:10.215753 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-9zfsq" servicePort="100.65.54.126:80/TCP"
I0622 16:21:10.215776 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-892lq" servicePort="100.66.230.101:80/TCP"
I0622 16:21:10.215793 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hhbx8" servicePort="100.66.10.227:80/TCP"
I0622 16:21:10.215811 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-wvvll" servicePort="100.70.223.192:80/TCP"
I0622 16:21:10.215825 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-ldprv" servicePort="100.66.12.132:80/TCP"
I0622 16:21:10.215842 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-vzzl9" servicePort="100.69.241.30:80/TCP"
I0622 16:21:10.215862 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-gbq87" servicePort="100.69.142.29:80/TCP"
I0622 16:21:10.215885 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-mqktd" servicePort="100.66.250.17:80/TCP"
I0622 16:21:10.215899 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-fz69j" servicePort="100.70.152.155:80/TCP"
I0622 16:21:10.216374 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:10.258090 11 proxier.go:1461] "Reloading service iptables data" numServices=80 numEndpoints=68 numFilterChains=4 numFilterRules=20 numNATChains=135 numNATRules=334
I0622 16:21:10.259445 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bkhsh" portCount=1
I0622 16:21:10.271429 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.89908ms"
I0622 16:21:10.433880 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d99v7" portCount=1
I0622 16:21:10.525263 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-g4b7k" portCount=1
I0622 16:21:10.688993 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-zthb5" portCount=1
I0622 16:21:10.731378 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bbbcv" portCount=1
I0622 16:21:10.784774 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nnxpn" portCount=1
I0622 16:21:10.799424 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bhlf6" portCount=1
I0622 16:21:10.838008 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-f2bkd" portCount=1
I0622 16:21:10.857961 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mjwx7" portCount=1
I0622 16:21:10.872881 11 service.go:322] "Service updated ports" service="aggregator-3267/sample-api" portCount=0
I0622 16:21:10.897448 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nrqmr" portCount=1
I0622 16:21:10.919435 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-qckb6" portCount=1
I0622 16:21:10.984121 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rwmsj" portCount=1
I0622 16:21:11.010248 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-47l7c" portCount=1
I0622 16:21:11.011699 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5nm4n" portCount=1
I0622 16:21:11.013548 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rrfmn" portCount=1
I0622 16:21:11.040993 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jljmz" portCount=1
I0622 16:21:11.083724 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5wkj6" portCount=1
I0622 16:21:11.132230 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-swtbt" portCount=1
I0622 16:21:11.194655 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d8lt8" portCount=1
I0622 16:21:11.194717 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-bbbcv" servicePort="100.71.252.243:80/TCP"
I0622 16:21:11.194827 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-47l7c" servicePort="100.68.32.139:80/TCP"
I0622 16:21:11.194937 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-5nm4n" servicePort="100.70.206.46:80/TCP"
I0622 16:21:11.195021 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-rrfmn" servicePort="100.70.164.185:80/TCP"
I0622 16:21:11.195111 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-jljmz" servicePort="100.66.164.206:80/TCP"
I0622 16:21:11.195201 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-swtbt" servicePort="100.68.244.64:80/TCP"
I0622 16:21:11.195314 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-bhlf6" servicePort="100.68.220.242:80/TCP"
I0622 16:21:11.195433 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-f2bkd" servicePort="100.65.103.68:80/TCP"
I0622 16:21:11.195529 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-qckb6" servicePort="100.69.180.98:80/TCP"
I0622 16:21:11.195608 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-rwmsj" servicePort="100.65.108.151:80/TCP"
I0622 16:21:11.195693 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-5wkj6" servicePort="100.67.159.154:80/TCP"
I0622 16:21:11.195782 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-d99v7" servicePort="100.68.240.11:80/TCP"
I0622 16:21:11.195867 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-nnxpn" servicePort="100.64.128.21:80/TCP"
I0622 16:21:11.195992 11 service.go:462] "Removing service port" portName="aggregator-3267/sample-api"
I0622 16:21:11.196083 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-bkhsh" servicePort="100.69.28.98:80/TCP"
I0622 16:21:11.196163 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-g4b7k" servicePort="100.67.124.133:80/TCP"
I0622 16:21:11.196227 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-zthb5" servicePort="100.67.2.100:80/TCP"
I0622 16:21:11.196291 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-mjwx7" servicePort="100.71.122.239:80/TCP"
I0622 16:21:11.196374 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-nrqmr" servicePort="100.69.162.105:80/TCP"
I0622 16:21:11.196448 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-d8lt8" servicePort="100.65.228.168:80/TCP"
I0622 16:21:11.196799 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:11.252266 11 proxier.go:1461] "Reloading service iptables data" numServices=98 numEndpoints=86 numFilterChains=4 numFilterRules=20 numNATChains=173 numNATRules=426
I0622 16:21:11.268299 11 proxier.go:820] "SyncProxyRules complete" elapsed="73.58493ms"
I0622 16:21:11.295375 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-tzf4s" portCount=1
I0622 16:21:11.332091 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hx44q" portCount=1
I0622 16:21:11.384447 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-r654r" portCount=1
I0622 16:21:11.401938 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9mw6k" portCount=1
I0622 16:21:11.442602 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jpn9j" portCount=1
I0622 16:21:11.549766 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hrgzv" portCount=1
I0622 16:21:11.643234 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-7wqxw" portCount=1
I0622 16:21:11.668587 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-7b252" portCount=1
I0622 16:21:11.758686 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hwphl" portCount=1
I0622 16:21:11.788898 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mk5qp" portCount=1
I0622 16:21:11.837414 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-md4fl" portCount=1
I0622 16:21:11.908712 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-k4n8b" portCount=1
I0622 16:21:11.987552 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-xlglv" portCount=1
I0622 16:21:12.021672 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bdcvw" portCount=1
I0622 16:21:12.037576 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d8hkz" portCount=1
I0622 16:21:12.194126 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-r654r" servicePort="100.66.137.73:80/TCP"
I0622 16:21:12.194202 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-jpn9j" servicePort="100.71.124.180:80/TCP"
I0622 16:21:12.194249 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-7wqxw" servicePort="100.68.89.13:80/TCP"
I0622 16:21:12.194445 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hwphl" servicePort="100.70.159.224:80/TCP"
I0622 16:21:12.194565 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-bdcvw" servicePort="100.70.56.28:80/TCP"
I0622 16:21:12.194640 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hx44q" servicePort="100.70.81.59:80/TCP"
I0622 16:21:12.194728 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-mk5qp" servicePort="100.69.126.149:80/TCP"
I0622 16:21:12.194818 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-md4fl" servicePort="100.71.14.193:80/TCP"
I0622 16:21:12.194904 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-tzf4s" servicePort="100.65.64.74:80/TCP"
I0622 16:21:12.194976 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-k4n8b" servicePort="100.70.97.102:80/TCP"
I0622 16:21:12.195039 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-xlglv" servicePort="100.67.8.227:80/TCP"
I0622 16:21:12.195111 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-d8hkz" servicePort="100.66.58.202:80/TCP"
I0622 16:21:12.195180 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-9mw6k" servicePort="100.71.233.156:80/TCP"
I0622 16:21:12.195264 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-hrgzv" servicePort="100.71.56.181:80/TCP"
I0622 16:21:12.195333 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-7b252" servicePort="100.68.159.43:80/TCP"
I0622 16:21:12.195717 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:12.210894 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-zhjxw" portCount=1
I0622 16:21:12.250402 11 proxier.go:1461] "Reloading service iptables data" numServices=113 numEndpoints=106 numFilterChains=4 numFilterRules=15 numNATChains=211 numNATRules=524
I0622 16:21:12.262514 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-7krtw" portCount=1
I0622 16:21:12.271751 11 proxier.go:820] "SyncProxyRules complete" elapsed="77.639713ms"
I0622 16:21:12.414845 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6c8tw" portCount=1
I0622 16:21:12.508693 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-xh7mb" portCount=1
I0622 16:21:12.539469 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-h678z" portCount=1
I0622 16:21:12.569665 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pskgv" portCount=1
I0622 16:21:12.678087 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ndfpx" portCount=1
I0622 16:21:12.691619 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bsvhw" portCount=1
I0622 16:21:12.774787 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4bcrf" portCount=1
I0622 16:21:12.809393 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kkn6w" portCount=1
I0622 16:21:12.888348 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ct8fn" portCount=1
I0622 16:21:12.903850 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-zln4x" portCount=1
I0622 16:21:12.937977 11 service.go:322] "Service updated ports" service="endpointslice-4134/example-empty-selector" portCount=1
I0622 16:21:12.963684 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2lkrp" portCount=1
I0622 16:21:12.987053 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-w756m" portCount=1
I0622 16:21:13.001826 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-j7rxk" portCount=1
I0622 16:21:13.030351 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5pnrz" portCount=1
I0622 16:21:13.045321 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cwv9v" portCount=1
I0622 16:21:13.060392 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-k5ngj" portCount=1
I0622 16:21:13.090926 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-tdr9w" portCount=1
I0622 16:21:13.119270 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pqtbj" portCount=1
I0622 16:21:13.134373 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8vr6w" portCount=1
I0622 16:21:13.160279 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6s5z9" portCount=1
I0622 16:21:13.193007 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-cwv9v" servicePort="100.68.63.244:80/TCP"
I0622 16:21:13.193046 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-6s5z9" servicePort="100.68.18.207:80/TCP"
I0622 16:21:13.193064 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-xh7mb" servicePort="100.70.233.45:80/TCP"
I0622 16:21:13.193108 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-ndfpx" servicePort="100.64.36.60:80/TCP"
I0622 16:21:13.193126 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-bsvhw" servicePort="100.69.255.137:80/TCP"
I0622 16:21:13.193141 11 service.go:437] "Adding new service port" portName="endpointslice-4134/example-empty-selector:example" servicePort="100.70.196.41:80/TCP"
I0622 16:21:13.193179 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-2lkrp" servicePort="100.69.79.235:80/TCP"
I0622 16:21:13.193196 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-5pnrz" servicePort="100.65.42.35:80/TCP"
I0622 16:21:13.193212 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-8vr6w" servicePort="100.69.44.254:80/TCP"
I0622 16:21:13.193228 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-h678z" servicePort="100.65.141.130:80/TCP"
I0622 16:21:13.193312 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-4bcrf" servicePort="100.71.61.120:80/TCP"
I0622 16:21:13.193331 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-ct8fn" servicePort="100.69.115.185:80/TCP"
I0622 16:21:13.193344 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-j7rxk" servicePort="100.68.151.77:80/TCP"
I0622 16:21:13.193356 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-tdr9w" servicePort="100.69.51.176:80/TCP"
I0622 16:21:13.193433 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-pqtbj" servicePort="100.67.176.177:80/TCP"
I0622 16:21:13.193454 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-zhjxw" servicePort="100.71.130.214:80/TCP"
I0622 16:21:13.193469 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-zln4x" servicePort="100.71.103.152:80/TCP"
I0622 16:21:13.193486 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-7krtw" servicePort="100.69.6.43:80/TCP"
I0622 16:21:13.193586 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-6c8tw" servicePort="100.70.88.42:80/TCP"
I0622 16:21:13.193617 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-pskgv" servicePort="100.70.45.3:80/TCP"
I0622 16:21:13.193698 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-kkn6w" servicePort="100.67.153.244:80/TCP"
I0622 16:21:13.193719 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-w756m" servicePort="100.65.249.113:80/TCP"
I0622 16:21:13.193734 11 service.go:437] "Adding new service port" portName="svc-latency-9242/latency-svc-k5ngj" servicePort="100.66.145.86:80/TCP"
I0622 16:21:13.194074 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:13.215726 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-g6b8k" portCount=1
I0622 16:21:13.249171 11 proxier.go:1461] "Reloading service iptables data" numServices=136 numEndpoints=126 numFilterChains=4 numFilterRules=19 numNATChains=249 numNATRules=619
I0622 16:21:13.268657 11 proxier.go:820] "SyncProxyRules complete" elapsed="75.671444ms"
I0622 16:21:13.308248 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nwj75" portCount=1
I0622 16:21:13.348982 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-sbl5g" portCount=1
I0622 16:21:13.356033 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-vvq8p" portCount=1
I0622 16:21:13.439444 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-c5ntq" portCount=1
I0622 16:21:13.533940 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ch4j8" portCount=1
I0622 16:21:13.586736 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nx8rl" portCount=1
I0622 16:21:13.628317 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lwrn5" portCount=1
I0622 16:21:13.688035 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ptb47" portCount=1
I0622 16:21:13.728819 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8rrjv" portCount=1
I0622 16:21:13.795416 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kmnbg" portCount=1
I0622
16:21:13.829045 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-6kcw4\" portCount=1\nI0622 16:21:13.880168 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-shr59\" portCount=1\nI0622 16:21:13.932982 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-jzblk\" portCount=1\nI0622 16:21:13.976743 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-8v2mv\" portCount=1\nI0622 16:21:14.032447 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-9nsx6\" portCount=1\nI0622 16:21:14.073668 11 service.go:322] \"Service updated ports\" service=\"webhook-2917/e2e-test-webhook\" portCount=1\nI0622 16:21:14.084215 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-f6rf2\" portCount=1\nI0622 16:21:14.129893 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-bsm99\" portCount=1\nI0622 16:21:14.174482 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-kcqwf\" portCount=1\nI0622 16:21:14.220474 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-ch4j8\" servicePort=\"100.68.85.174:80/TCP\"\nI0622 16:21:14.220632 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-g6b8k\" servicePort=\"100.67.89.236:80/TCP\"\nI0622 16:21:14.220707 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-vvq8p\" servicePort=\"100.65.62.185:80/TCP\"\nI0622 16:21:14.220762 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-c5ntq\" servicePort=\"100.66.53.193:80/TCP\"\nI0622 16:21:14.220806 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-kmnbg\" servicePort=\"100.67.234.159:80/TCP\"\nI0622 16:21:14.220875 11 service.go:437] \"Adding new service port\" 
portName=\"svc-latency-9242/latency-svc-6kcw4\" servicePort=\"100.67.170.126:80/TCP\"\nI0622 16:21:14.220925 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-8v2mv\" servicePort=\"100.65.251.245:80/TCP\"\nI0622 16:21:14.220962 11 service.go:437] \"Adding new service port\" portName=\"webhook-2917/e2e-test-webhook\" servicePort=\"100.66.156.56:8443/TCP\"\nI0622 16:21:14.221014 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-f6rf2\" servicePort=\"100.66.19.117:80/TCP\"\nI0622 16:21:14.221064 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-nwj75\" servicePort=\"100.69.116.80:80/TCP\"\nI0622 16:21:14.221100 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-nx8rl\" servicePort=\"100.69.33.66:80/TCP\"\nI0622 16:21:14.221155 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-8rrjv\" servicePort=\"100.69.139.113:80/TCP\"\nI0622 16:21:14.221207 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-ptb47\" servicePort=\"100.64.119.94:80/TCP\"\nI0622 16:21:14.221244 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-shr59\" servicePort=\"100.69.38.94:80/TCP\"\nI0622 16:21:14.221296 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-9nsx6\" servicePort=\"100.64.14.81:80/TCP\"\nI0622 16:21:14.221346 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-bsm99\" servicePort=\"100.70.189.50:80/TCP\"\nI0622 16:21:14.221385 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-kcqwf\" servicePort=\"100.70.250.108:80/TCP\"\nI0622 16:21:14.221437 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-sbl5g\" servicePort=\"100.64.57.167:80/TCP\"\nI0622 16:21:14.221489 11 service.go:437] \"Adding new 
service port\" portName=\"svc-latency-9242/latency-svc-lwrn5\" servicePort=\"100.68.139.157:80/TCP\"\nI0622 16:21:14.221530 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-jzblk\" servicePort=\"100.67.28.96:80/TCP\"\nI0622 16:21:14.221997 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:14.232410 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-ms7rk\" portCount=1\nI0622 16:21:14.275195 11 proxier.go:1461] \"Reloading service iptables data\" numServices=156 numEndpoints=145 numFilterChains=4 numFilterRules=18 numNATChains=290 numNATRules=722\nI0622 16:21:14.287470 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-cxcw5\" portCount=1\nI0622 16:21:14.296688 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"76.239117ms\"\nI0622 16:21:14.346046 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-9bzz8\" portCount=1\nI0622 16:21:14.388496 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vs5tv\" portCount=1\nI0622 16:21:14.428119 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-m9jvv\" portCount=1\nI0622 16:21:14.484319 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-cd8gq\" portCount=1\nI0622 16:21:14.550261 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-bwtxp\" portCount=1\nI0622 16:21:14.601533 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-scpzx\" portCount=1\nI0622 16:21:14.643668 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-slggb\" portCount=1\nI0622 16:21:14.693488 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-wmrl6\" portCount=1\nI0622 16:21:14.760551 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-mrxnx\" 
portCount=1\nI0622 16:21:14.805911 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-58rkp\" portCount=1\nI0622 16:21:14.879110 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-dr42j\" portCount=1\nI0622 16:21:14.936792 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-htpg7\" portCount=1\nI0622 16:21:14.989071 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-t9h8t\" portCount=1\nI0622 16:21:15.042199 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-w9m7q\" portCount=1\nI0622 16:21:15.087916 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-4npr5\" portCount=1\nI0622 16:21:15.130630 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-8x2gm\" portCount=1\nI0622 16:21:15.149644 11 service.go:322] \"Service updated ports\" service=\"endpointslice-4134/example-empty-selector\" portCount=0\nI0622 16:21:15.180930 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-k7lcx\" portCount=1\nI0622 16:21:15.217081 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-bwtxp\" servicePort=\"100.69.65.192:80/TCP\"\nI0622 16:21:15.217130 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-scpzx\" servicePort=\"100.64.66.127:80/TCP\"\nI0622 16:21:15.217370 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-t9h8t\" servicePort=\"100.70.30.250:80/TCP\"\nI0622 16:21:15.217424 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-w9m7q\" servicePort=\"100.64.114.179:80/TCP\"\nI0622 16:21:15.217450 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-9bzz8\" servicePort=\"100.68.85.46:80/TCP\"\nI0622 16:21:15.217511 11 service.go:437] \"Adding new service 
port\" portName=\"svc-latency-9242/latency-svc-vs5tv\" servicePort=\"100.67.140.173:80/TCP\"\nI0622 16:21:15.217535 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-cd8gq\" servicePort=\"100.68.211.112:80/TCP\"\nI0622 16:21:15.217550 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-m9jvv\" servicePort=\"100.68.114.131:80/TCP\"\nI0622 16:21:15.217586 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-htpg7\" servicePort=\"100.70.7.190:80/TCP\"\nI0622 16:21:15.217607 11 service.go:462] \"Removing service port\" portName=\"endpointslice-4134/example-empty-selector:example\"\nI0622 16:21:15.217628 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-8x2gm\" servicePort=\"100.69.75.8:80/TCP\"\nI0622 16:21:15.217667 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-k7lcx\" servicePort=\"100.64.249.130:80/TCP\"\nI0622 16:21:15.217683 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-slggb\" servicePort=\"100.69.79.156:80/TCP\"\nI0622 16:21:15.217700 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-mrxnx\" servicePort=\"100.68.54.224:80/TCP\"\nI0622 16:21:15.217714 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-4npr5\" servicePort=\"100.70.52.140:80/TCP\"\nI0622 16:21:15.217752 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-58rkp\" servicePort=\"100.67.4.253:80/TCP\"\nI0622 16:21:15.217766 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-dr42j\" servicePort=\"100.66.129.184:80/TCP\"\nI0622 16:21:15.217789 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-ms7rk\" servicePort=\"100.68.179.27:80/TCP\"\nI0622 16:21:15.217803 11 service.go:437] \"Adding new service port\" 
portName=\"svc-latency-9242/latency-svc-cxcw5\" servicePort=\"100.64.219.9:80/TCP\"\nI0622 16:21:15.217842 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-wmrl6\" servicePort=\"100.64.155.240:80/TCP\"\nI0622 16:21:15.218493 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:15.235645 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-nzjlk\" portCount=1\nI0622 16:21:15.274752 11 proxier.go:1461] \"Reloading service iptables data\" numServices=174 numEndpoints=165 numFilterChains=4 numFilterRules=16 numNATChains=330 numNATRules=822\nI0622 16:21:15.292878 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xpmpf\" portCount=1\nI0622 16:21:15.298929 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"81.869386ms\"\nI0622 16:21:15.332054 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-52mz4\" portCount=1\nI0622 16:21:15.379893 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-6nfm8\" portCount=1\nI0622 16:21:15.429539 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zq2s9\" portCount=1\nI0622 16:21:15.477281 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-2sk8q\" portCount=1\nI0622 16:21:15.534004 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-wsdq7\" portCount=1\nI0622 16:21:15.586199 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-rxlz4\" portCount=1\nI0622 16:21:15.635524 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-p64tg\" portCount=1\nI0622 16:21:15.686004 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-299hm\" portCount=1\nI0622 16:21:15.740752 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-bvtdg\" portCount=1\nI0622 
16:21:15.808134 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-9zpb7\" portCount=1\nI0622 16:21:15.847700 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-jxgcg\" portCount=1\nI0622 16:21:15.899977 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-24zcq\" portCount=1\nI0622 16:21:16.014323 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xtvr7\" portCount=1\nI0622 16:21:16.088605 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-5v942\" portCount=1\nI0622 16:21:16.148379 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-v9hgz\" portCount=1\nI0622 16:21:16.178603 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-gm6s9\" portCount=1\nI0622 16:21:16.236800 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-nzjlk\" servicePort=\"100.66.127.252:80/TCP\"\nI0622 16:21:16.236834 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-2sk8q\" servicePort=\"100.70.216.168:80/TCP\"\nI0622 16:21:16.236849 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-jxgcg\" servicePort=\"100.64.150.220:80/TCP\"\nI0622 16:21:16.236864 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-24zcq\" servicePort=\"100.67.113.210:80/TCP\"\nI0622 16:21:16.236878 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-xpmpf\" servicePort=\"100.67.108.191:80/TCP\"\nI0622 16:21:16.236892 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-52mz4\" servicePort=\"100.64.249.72:80/TCP\"\nI0622 16:21:16.236926 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-6nfm8\" servicePort=\"100.70.164.74:80/TCP\"\nI0622 16:21:16.236940 11 
service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-p64tg\" servicePort=\"100.71.86.133:80/TCP\"\nI0622 16:21:16.236952 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-299hm\" servicePort=\"100.64.186.234:80/TCP\"\nI0622 16:21:16.236998 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-zq2s9\" servicePort=\"100.68.184.28:80/TCP\"\nI0622 16:21:16.237017 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-wsdq7\" servicePort=\"100.71.95.18:80/TCP\"\nI0622 16:21:16.237031 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-9zpb7\" servicePort=\"100.66.163.124:80/TCP\"\nI0622 16:21:16.237044 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-xtvr7\" servicePort=\"100.65.254.109:80/TCP\"\nI0622 16:21:16.237215 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-5v942\" servicePort=\"100.68.36.160:80/TCP\"\nI0622 16:21:16.237264 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-rxlz4\" servicePort=\"100.69.242.169:80/TCP\"\nI0622 16:21:16.237289 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-bvtdg\" servicePort=\"100.69.131.143:80/TCP\"\nI0622 16:21:16.237331 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-v9hgz\" servicePort=\"100.65.160.38:80/TCP\"\nI0622 16:21:16.237353 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-gm6s9\" servicePort=\"100.67.134.166:80/TCP\"\nI0622 16:21:16.237841 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:16.302061 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-98r2w\" portCount=1\nI0622 16:21:16.320723 11 proxier.go:1461] \"Reloading service iptables data\" numServices=192 numEndpoints=185 
numFilterChains=4 numFilterRules=14 numNATChains=370 numNATRules=922\nI0622 16:21:16.347082 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"110.296437ms\"\nI0622 16:21:16.380728 11 service.go:322] \"Service updated ports\" service=\"kubectl-9868/rm2\" portCount=1\nI0622 16:21:16.388203 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zz88g\" portCount=1\nI0622 16:21:16.394409 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-pgd6r\" portCount=1\nI0622 16:21:16.456378 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-nrtkb\" portCount=1\nI0622 16:21:16.469104 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-lzbj5\" portCount=1\nI0622 16:21:16.497320 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-d9fxk\" portCount=1\nI0622 16:21:16.512168 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zqzzn\" portCount=1\nI0622 16:21:16.570296 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-99l7f\" portCount=1\nI0622 16:21:16.592887 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vsv4z\" portCount=1\nI0622 16:21:16.632152 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-gl7cq\" portCount=1\nI0622 16:21:16.695332 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-fv4tc\" portCount=1\nI0622 16:21:16.714621 11 service.go:322] \"Service updated ports\" service=\"webhook-2917/e2e-test-webhook\" portCount=0\nI0622 16:21:16.774175 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-5vmgn\" portCount=1\nI0622 16:21:16.829912 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-8nvfb\" portCount=1\nI0622 16:21:16.872506 11 service.go:322] \"Service updated ports\" 
service=\"svc-latency-9242/latency-svc-sj85d\" portCount=1\nI0622 16:21:16.910466 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zrp86\" portCount=1\nI0622 16:21:16.982652 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-rwtzd\" portCount=1\nI0622 16:21:17.189912 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-fhzwl\" portCount=1\nI0622 16:21:17.190082 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-98r2w\" servicePort=\"100.71.175.65:80/TCP\"\nI0622 16:21:17.190122 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-nrtkb\" servicePort=\"100.69.60.208:80/TCP\"\nI0622 16:21:17.190138 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-8nvfb\" servicePort=\"100.69.254.221:80/TCP\"\nI0622 16:21:17.190153 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-fhzwl\" servicePort=\"100.64.172.227:80/TCP\"\nI0622 16:21:17.190167 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-vsv4z\" servicePort=\"100.66.209.205:80/TCP\"\nI0622 16:21:17.190183 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-gl7cq\" servicePort=\"100.64.112.90:80/TCP\"\nI0622 16:21:17.190199 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-fv4tc\" servicePort=\"100.64.45.2:80/TCP\"\nI0622 16:21:17.190211 11 service.go:462] \"Removing service port\" portName=\"webhook-2917/e2e-test-webhook\"\nI0622 16:21:17.190229 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-sj85d\" servicePort=\"100.70.18.39:80/TCP\"\nI0622 16:21:17.190242 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-zrp86\" servicePort=\"100.64.69.44:80/TCP\"\nI0622 16:21:17.190255 11 service.go:437] \"Adding new 
service port\" portName=\"kubectl-9868/rm2\" servicePort=\"100.64.128.207:1234/TCP\"\nI0622 16:21:17.190269 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-lzbj5\" servicePort=\"100.68.100.95:80/TCP\"\nI0622 16:21:17.190284 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-d9fxk\" servicePort=\"100.69.214.2:80/TCP\"\nI0622 16:21:17.190302 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-99l7f\" servicePort=\"100.64.94.171:80/TCP\"\nI0622 16:21:17.190322 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-5vmgn\" servicePort=\"100.69.234.46:80/TCP\"\nI0622 16:21:17.190338 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-rwtzd\" servicePort=\"100.64.20.15:80/TCP\"\nI0622 16:21:17.190369 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-zz88g\" servicePort=\"100.66.181.111:80/TCP\"\nI0622 16:21:17.190391 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-pgd6r\" servicePort=\"100.67.158.114:80/TCP\"\nI0622 16:21:17.190405 11 service.go:437] \"Adding new service port\" portName=\"svc-latency-9242/latency-svc-zqzzn\" servicePort=\"100.71.141.129:80/TCP\"\nI0622 16:21:17.190924 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:17.276669 11 proxier.go:1461] \"Reloading service iptables data\" numServices=209 numEndpoints=201 numFilterChains=4 numFilterRules=15 numNATChains=404 numNATRules=1004\nI0622 16:21:17.335982 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"145.913919ms\"\nI0622 16:21:18.199035 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:18.248484 11 service.go:322] \"Service updated ports\" service=\"dns-8541/test-service-2\" portCount=0\nI0622 16:21:18.264153 11 proxier.go:1461] \"Reloading service iptables data\" numServices=209 numEndpoints=212 numFilterChains=4 
numFilterRules=5 numNATChains=424 numNATRules=1054\nI0622 16:21:18.291787 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"93.237779ms\"\nI0622 16:21:18.870379 11 service.go:322] \"Service updated ports\" service=\"kubectl-9868/rm3\" portCount=1\nI0622 16:21:19.291971 11 service.go:462] \"Removing service port\" portName=\"dns-8541/test-service-2:http\"\nI0622 16:21:19.292050 11 service.go:437] \"Adding new service port\" portName=\"kubectl-9868/rm3\" servicePort=\"100.64.14.137:2345/TCP\"\nI0622 16:21:19.292447 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:19.377666 11 proxier.go:1461] \"Reloading service iptables data\" numServices=209 numEndpoints=212 numFilterChains=4 numFilterRules=6 numNATChains=424 numNATRules=1052\nI0622 16:21:19.403740 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"111.794516ms\"\nI0622 16:21:21.216251 11 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-8394/svc-udp:udp\" clusterIP=\"100.65.216.237\"\nI0622 16:21:21.216285 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:21.278554 11 proxier.go:1461] \"Reloading service iptables data\" numServices=209 numEndpoints=212 numFilterChains=4 numFilterRules=5 numNATChains=423 numNATRules=1054\nI0622 16:21:21.311715 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"95.824044ms\"\nI0622 16:21:23.087061 11 service.go:322] \"Service updated ports\" service=\"services-8533/nodeport-range-test\" portCount=1\nI0622 16:21:23.087121 11 service.go:437] \"Adding new service port\" portName=\"services-8533/nodeport-range-test\" servicePort=\"100.66.232.37:80/TCP\"\nI0622 16:21:23.087476 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:23.150815 11 proxier.go:1461] \"Reloading service iptables data\" numServices=210 numEndpoints=212 numFilterChains=4 numFilterRules=7 numNATChains=423 numNATRules=1054\nI0622 16:21:23.180981 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"93.862337ms\"\nI0622 16:21:23.181847 11 
proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:23.281903 11 proxier.go:1461] \"Reloading service iptables data\" numServices=210 numEndpoints=212 numFilterChains=4 numFilterRules=7 numNATChains=423 numNATRules=1054\nI0622 16:21:23.323370 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"142.301317ms\"\nI0622 16:21:23.368063 11 service.go:322] \"Service updated ports\" service=\"services-8533/nodeport-range-test\" portCount=0\nI0622 16:21:24.324016 11 service.go:462] \"Removing service port\" portName=\"services-8533/nodeport-range-test\"\nI0622 16:21:24.324360 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:24.392108 11 proxier.go:1461] \"Reloading service iptables data\" numServices=209 numEndpoints=212 numFilterChains=4 numFilterRules=5 numNATChains=423 numNATRules=1054\nI0622 16:21:24.420259 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"96.279311ms\"\nI0622 16:21:26.331125 11 service.go:322] \"Service updated ports\" service=\"services-992/hairpin-test\" portCount=1\nI0622 16:21:26.331180 11 service.go:437] \"Adding new service port\" portName=\"services-992/hairpin-test\" servicePort=\"100.65.242.237:8080/TCP\"\nI0622 16:21:26.331548 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:26.432168 11 proxier.go:1461] \"Reloading service iptables data\" numServices=210 numEndpoints=212 numFilterChains=4 numFilterRules=6 numNATChains=423 numNATRules=1054\nI0622 16:21:26.463062 11 service.go:322] \"Service updated ports\" service=\"kubectl-9868/rm2\" portCount=0\nI0622 16:21:26.480853 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"149.670013ms\"\nI0622 16:21:26.480902 11 service.go:462] \"Removing service port\" portName=\"kubectl-9868/rm2\"\nI0622 16:21:26.481233 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:26.565980 11 service.go:322] \"Service updated ports\" service=\"kubectl-9868/rm3\" portCount=0\nI0622 16:21:26.569251 11 proxier.go:1461] \"Reloading service iptables data\" numServices=209 
numEndpoints=211 numFilterChains=4 numFilterRules=7 numNATChains=423 numNATRules=1048
I0622 16:21:26.596914 11 proxier.go:820] "SyncProxyRules complete" elapsed="116.010172ms"
I0622 16:21:27.597734 11 service.go:462] "Removing service port" portName="kubectl-9868/rm3"
I0622 16:21:27.598201 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:27.688002 11 proxier.go:1461] "Reloading service iptables data" numServices=208 numEndpoints=210 numFilterChains=4 numFilterRules=6 numNATChains=419 numNATRules=1044
I0622 16:21:27.732241 11 proxier.go:820] "SyncProxyRules complete" elapsed="134.531905ms"
I0622 16:21:28.357262 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:28.438484 11 proxier.go:1461] "Reloading service iptables data" numServices=208 numEndpoints=211 numFilterChains=4 numFilterRules=40 numNATChains=421 numNATRules=944
I0622 16:21:28.483355 11 proxier.go:820] "SyncProxyRules complete" elapsed="127.207517ms"
I0622 16:21:29.346240 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:29.407540 11 proxier.go:1461] "Reloading service iptables data" numServices=208 numEndpoints=19 numFilterChains=4 numFilterRules=199 numNATChains=351 numNATRules=397
I0622 16:21:29.420507 11 proxier.go:820] "SyncProxyRules complete" elapsed="76.399762ms"
I0622 16:21:29.555975 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-24zcq" portCount=0
I0622 16:21:29.564596 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-299hm" portCount=0
I0622 16:21:29.574611 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2hlzp" portCount=0
I0622 16:21:29.582805 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2lcpb" portCount=0
I0622 16:21:29.591772 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2lfbc" portCount=0
I0622 16:21:29.605645 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2lkrp" portCount=0
I0622 16:21:29.622675 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2ptpx" portCount=0
I0622 16:21:29.631689 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-2sk8q" portCount=0
I0622 16:21:29.642055 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-47l7c" portCount=0
I0622 16:21:29.653973 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4bcrf" portCount=0
I0622 16:21:29.663858 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4gjgv" portCount=0
I0622 16:21:29.673007 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4npr5" portCount=0
I0622 16:21:29.681290 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-4zmxr" portCount=0
I0622 16:21:29.691109 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-52mz4" portCount=0
I0622 16:21:29.704015 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-58rkp" portCount=0
I0622 16:21:29.718740 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5lxg5" portCount=0
I0622 16:21:29.732896 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5nm4n" portCount=0
I0622 16:21:29.743396 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5pnrz" portCount=0
I0622 16:21:29.769430 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5v942" portCount=0
I0622 16:21:29.777345 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5vmgn" portCount=0
I0622 16:21:29.786927 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-5wkj6" portCount=0
I0622 16:21:29.798216 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6c8tw" portCount=0
I0622 16:21:29.815337 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6kcw4" portCount=0
I0622 16:21:29.823789 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6nfm8" portCount=0
I0622 16:21:29.833605 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6npgk" portCount=0
I0622 16:21:29.842156 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-6s5z9" portCount=0
I0622 16:21:29.855672 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-7b252" portCount=0
I0622 16:21:29.867046 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-7krtw" portCount=0
I0622 16:21:29.884801 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-7wqxw" portCount=0
I0622 16:21:29.896114 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-892lq" portCount=0
I0622 16:21:29.904737 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-89892" portCount=0
I0622 16:21:29.919550 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8f6gw" portCount=0
I0622 16:21:29.928466 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8lq2l" portCount=0
I0622 16:21:29.936933 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8nvfb" portCount=0
I0622 16:21:29.953656 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8r9bd" portCount=0
I0622 16:21:29.970869 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8rrjv" portCount=0
I0622 16:21:29.982111 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8v2mv" portCount=0
I0622 16:21:30.004602 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8vr6w" portCount=0
I0622 16:21:30.013907 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-8x2gm" portCount=0
I0622 16:21:30.027751 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-98r2w" portCount=0
I0622 16:21:30.036503 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-99l7f" portCount=0
I0622 16:21:30.052895 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9bzz8" portCount=0
I0622 16:21:30.087524 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9mw6k" portCount=0
I0622 16:21:30.108511 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9nsx6" portCount=0
I0622 16:21:30.119891 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9sc5t" portCount=0
I0622 16:21:30.141994 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9zfsq" portCount=0
I0622 16:21:30.153299 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-9zpb7" portCount=0
I0622 16:21:30.165527 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-b4684" portCount=0
I0622 16:21:30.235030 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-b6pq6" portCount=0
I0622 16:21:30.272853 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bbbcv" portCount=0
I0622 16:21:30.287757 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bdcvw" portCount=0
I0622 16:21:30.305001 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bhlf6" portCount=0
I0622 16:21:30.319820 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bkhsh" portCount=0
I0622 16:21:30.354002 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bsm99" portCount=0
I0622 16:21:30.354056 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-24zcq"
I0622 16:21:30.354106 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8f6gw"
I0622 16:21:30.354145 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-99l7f"
I0622 16:21:30.354193 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8vr6w"
I0622 16:21:30.354207 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-6npgk"
I0622 16:21:30.354219 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8r9bd"
I0622 16:21:30.354267 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-2sk8q"
I0622 16:21:30.354286 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-7b252"
I0622 16:21:30.354300 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-892lq"
I0622 16:21:30.354311 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-9bzz8"
I0622 16:21:30.354324 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bdcvw"
I0622 16:21:30.354363 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-6nfm8"
I0622 16:21:30.354381 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-2lfbc"
I0622 16:21:30.354394 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-2lkrp"
I0622 16:21:30.354404 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-5nm4n"
I0622 16:21:30.354436 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-5wkj6"
I0622 16:21:30.354448 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8nvfb"
I0622 16:21:30.354458 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bkhsh"
I0622 16:21:30.354467 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bbbcv"
I0622 16:21:30.354477 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-2ptpx"
I0622 16:21:30.354486 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-47l7c"
I0622 16:21:30.354496 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-5pnrz"
I0622 16:21:30.354533 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-6s5z9"
I0622 16:21:30.354546 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-89892"
I0622 16:21:30.354557 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8v2mv"
I0622 16:21:30.354567 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-2lcpb"
I0622 16:21:30.354578 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-52mz4"
I0622 16:21:30.354613 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-9zfsq"
I0622 16:21:30.354627 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-5lxg5"
I0622 16:21:30.354638 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8lq2l"
I0622 16:21:30.354648 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-4zmxr"
I0622 16:21:30.354657 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-6c8tw"
I0622 16:21:30.354668 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-6kcw4"
I0622 16:21:30.354702 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8rrjv"
I0622 16:21:30.354734 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-4npr5"
I0622 16:21:30.354745 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-5vmgn"
I0622 16:21:30.354780 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-9mw6k"
I0622 16:21:30.354795 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-9sc5t"
I0622 16:21:30.354806 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bhlf6"
I0622 16:21:30.354817 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-7wqxw"
I0622 16:21:30.354829 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-98r2w"
I0622 16:21:30.354865 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-9zpb7"
I0622 16:21:30.354879 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-b6pq6"
I0622 16:21:30.354890 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-2hlzp"
I0622 16:21:30.354900 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-4gjgv"
I0622 16:21:30.354915 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-58rkp"
I0622 16:21:30.354959 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-5v942"
I0622 16:21:30.354973 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-9nsx6"
I0622 16:21:30.354983 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-b4684"
I0622 16:21:30.354993 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-299hm"
I0622 16:21:30.355004 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-4bcrf"
I0622 16:21:30.355042 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-7krtw"
I0622 16:21:30.355057 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-8x2gm"
I0622 16:21:30.355068 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bsm99"
I0622 16:21:30.355390 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:30.379024 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bsvhw" portCount=0
I0622 16:21:30.401535 11 proxier.go:1461] "Reloading service iptables data" numServices=154 numEndpoints=10 numFilterChains=4 numFilterRules=152 numNATChains=33 numNATRules=58
I0622 16:21:30.409896 11 proxier.go:820] "SyncProxyRules complete" elapsed="55.838147ms"
I0622 16:21:30.431226 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bvtdg" portCount=0
I0622 16:21:30.460059 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-bwtxp" portCount=0
I0622 16:21:30.477590 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-c46b6" portCount=0
I0622 16:21:30.495776 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-c5ntq" portCount=0
I0622 16:21:30.521388 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cd8gq" portCount=0
I0622 16:21:30.538702 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ch4j8" portCount=0
I0622 16:21:30.559829 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cprt2" portCount=0
I0622 16:21:30.583313 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cqxj2" portCount=0
I0622 16:21:30.599162 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ct8fn" portCount=0
I0622 16:21:30.622237 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cwv9v" portCount=0
I0622 16:21:30.634437 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-cxcw5" portCount=0
I0622 16:21:30.645224 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d4jt8" portCount=0
I0622 16:21:30.656962 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d8hkz" portCount=0
I0622 16:21:30.687063 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d8lt8" portCount=0
I0622 16:21:30.703236 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d99v7" portCount=0
I0622 16:21:30.715546 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-d9fxk" portCount=0
I0622 16:21:30.730778 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-dh5fs" portCount=0
I0622 16:21:30.750389 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-dr42j" portCount=0
I0622 16:21:30.777607 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-dvmph" portCount=0
I0622 16:21:30.824677 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-f2bkd" portCount=0
I0622 16:21:30.857956 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-f6rf2" portCount=0
I0622 16:21:30.883667 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fhzwl" portCount=0
I0622 16:21:30.917902 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ftv94" portCount=0
I0622 16:21:30.959895 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fv46k" portCount=0
I0622 16:21:30.982497 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fv4tc" portCount=0
I0622 16:21:31.013843 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fz266" portCount=0
I0622 16:21:31.027818 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-fz69j" portCount=0
I0622 16:21:31.041841 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-g4b7k" portCount=0
I0622 16:21:31.068088 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-g6b8k" portCount=0
I0622 16:21:31.085509 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-gbq87" portCount=0
I0622 16:21:31.114341 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ggb4d" portCount=0
I0622 16:21:31.134996 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-gl7cq" portCount=0
I0622 16:21:31.160342 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-gm6s9" portCount=0
I0622 16:21:31.178686 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-h678z" portCount=0
I0622 16:21:31.197562 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hbd52" portCount=0
I0622 16:21:31.219885 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hbnp5" portCount=0
I0622 16:21:31.240658 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hcs2s" portCount=0
I0622 16:21:31.250808 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hhbx8" portCount=0
I0622 16:21:31.270816 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hjcn9" portCount=0
I0622 16:21:31.287822 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hrgzv" portCount=0
I0622 16:21:31.319035 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-htpg7" portCount=0
I0622 16:21:31.359026 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hwphl" portCount=0
I0622 16:21:31.359102 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-d4jt8"
I0622 16:21:31.359117 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-fv46k"
I0622 16:21:31.359128 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-h678z"
I0622 16:21:31.359141 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hhbx8"
I0622 16:21:31.359153 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-htpg7"
I0622 16:21:31.359164 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-cqxj2"
I0622 16:21:31.359175 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-cxcw5"
I0622 16:21:31.359185 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ftv94"
I0622 16:21:31.359196 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hbd52"
I0622 16:21:31.359207 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hbnp5"
I0622 16:21:31.359216 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bvtdg"
I0622 16:21:31.359225 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-cprt2"
I0622 16:21:31.359235 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-dh5fs"
I0622 16:21:31.359279 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-gbq87"
I0622 16:21:31.359291 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-gl7cq"
I0622 16:21:31.359301 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-gm6s9"
I0622 16:21:31.359311 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hcs2s"
I0622 16:21:31.359322 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-c5ntq"
I0622 16:21:31.359335 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-d9fxk"
I0622 16:21:31.359346 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-f6rf2"
I0622 16:21:31.359357 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-fhzwl"
I0622 16:21:31.359368 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-fv4tc"
I0622 16:21:31.359378 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hwphl"
I0622 16:21:31.359388 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-d8hkz"
I0622 16:21:31.359399 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-cwv9v"
I0622 16:21:31.359412 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-d8lt8"
I0622 16:21:31.359424 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-d99v7"
I0622 16:21:31.359435 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-f2bkd"
I0622 16:21:31.359621 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-g4b7k"
I0622 16:21:31.359686 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hjcn9"
I0622 16:21:31.359722 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ch4j8"
I0622 16:21:31.359755 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hrgzv"
I0622 16:21:31.359799 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-dr42j"
I0622 16:21:31.359836 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-fz266"
I0622 16:21:31.359871 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-fz69j"
I0622 16:21:31.359905 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-g6b8k"
I0622 16:21:31.359939 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ggb4d"
I0622 16:21:31.359983 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bsvhw"
I0622 16:21:31.360033 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-c46b6"
I0622 16:21:31.360080 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-cd8gq"
I0622 16:21:31.360114 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ct8fn"
I0622 16:21:31.360153 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-dvmph"
I0622 16:21:31.360197 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-bwtxp"
I0622 16:21:31.360333 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:31.381681 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-hx44q" portCount=0
I0622 16:21:31.401142 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-j7rxk" portCount=0
I0622 16:21:31.428218 11 proxier.go:1461] "Reloading service iptables data" numServices=111 numEndpoints=10 numFilterChains=4 numFilterRules=109 numNATChains=19 numNATRules=44
I0622 16:21:31.435192 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jljmz" portCount=0
I0622 16:21:31.438621 11 proxier.go:820] "SyncProxyRules complete" elapsed="79.541814ms"
I0622 16:21:31.461764 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jpn9j" portCount=0
I0622 16:21:31.510000 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jvtg2" portCount=0
I0622 16:21:31.521492 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jxgcg" portCount=0
I0622 16:21:31.547555 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-jzblk" portCount=0
I0622 16:21:31.571587 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-k4n8b" portCount=0
I0622 16:21:31.587347 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-k5ngj" portCount=0
I0622 16:21:31.608724 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-k7lcx" portCount=0
I0622 16:21:31.622016 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kclls" portCount=0
I0622 16:21:31.635832 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kcqwf" portCount=0
I0622 16:21:31.663429 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-khb2m" portCount=0
I0622 16:21:31.691364 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kkn6w" portCount=0
I0622 16:21:31.715643 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-kmnbg" portCount=0
I0622 16:21:31.727440 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-l4cwq" portCount=0
I0622 16:21:31.740035 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lcnqq" portCount=0
I0622 16:21:31.761855 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ldprv" portCount=0
I0622 16:21:31.786016 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lj7l9" portCount=0
I0622 16:21:31.806642 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ljz6g" portCount=0
I0622 16:21:31.822382 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lwrn5" portCount=0
I0622 16:21:31.831830 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-lzbj5" portCount=0
I0622 16:21:31.852131 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-m9jvv" portCount=0
I0622 16:21:31.868420 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-m9vcq" portCount=0
I0622 16:21:31.881655 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-md4fl" portCount=0
I0622 16:21:31.895001 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mgr9x" portCount=0
I0622 16:21:31.906926 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mjwx7" portCount=0
I0622 16:21:31.917721 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mk5qp" portCount=0
I0622 16:21:31.933374 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mqktd" portCount=0
I0622 16:21:31.948578 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-mrxnx" portCount=0
I0622 16:21:31.958302 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ms7rk" portCount=0
I0622 16:21:31.978800 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-n2sgb" portCount=0
I0622 16:21:31.993038 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ndfpx" portCount=0
I0622 16:21:32.007447 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nnxpn" portCount=0
I0622 16:21:32.016370 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nrqmr" portCount=0
I0622 16:21:32.025633 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nrtkb" portCount=0
I0622 16:21:32.033444 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nt9nr" portCount=0
I0622 16:21:32.043993 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nwj75" portCount=0
I0622 16:21:32.054589 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nwsbp" portCount=0
I0622 16:21:32.062879 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nx8rl" portCount=0
I0622 16:21:32.077018 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-nzjlk" portCount=0
I0622 16:21:32.118336 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-p64tg" portCount=0
I0622 16:21:32.131556 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pb4rj" portCount=0
I0622 16:21:32.147663 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pgd6r" portCount=0
I0622 16:21:32.160370 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pkhrb" portCount=0
I0622 16:21:32.173405 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pmfw6" portCount=0
I0622 16:21:32.184942 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pqtbj" portCount=0
I0622 16:21:32.200623 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-pskgv" portCount=0
I0622 16:21:32.222487 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-ptb47" portCount=0
I0622 16:21:32.238672 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-q8wgz" portCount=0
I0622 16:21:32.249636 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-qckb6" portCount=0
I0622 16:21:32.262584 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-qq6rq" portCount=0
I0622 16:21:32.306746 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-qqmj6" portCount=0
I0622 16:21:32.316926 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-r654r" portCount=0
I0622 16:21:32.328987 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rhf9x" portCount=0
I0622 16:21:32.351163 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rrfmn" portCount=0
I0622 16:21:32.351215 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-jzblk"
I0622 16:21:32.351229 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ljz6g"
I0622 16:21:32.351291 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ldprv"
I0622 16:21:32.351307 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-lwrn5"
I0622 16:21:32.351318 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ms7rk"
I0622 16:21:32.351328 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-m9vcq"
I0622 16:21:32.351340 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ndfpx"
I0622 16:21:32.351350 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nt9nr"
I0622 16:21:32.351501 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-n2sgb"
I0622 16:21:32.351521 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nwsbp"
I0622 16:21:32.351533 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-j7rxk"
I0622 16:21:32.351544 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-mgr9x"
I0622 16:21:32.351575 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-rrfmn"
I0622 16:21:32.351603 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-lj7l9"
I0622 16:21:32.351625 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-m9jvv"
I0622 16:21:32.351659 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-md4fl"
I0622 16:21:32.351672 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nrqmr"
I0622 16:21:32.351683 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nx8rl"
I0622 16:21:32.351696 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-jxgcg"
I0622 16:21:32.351716 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-kcqwf"
I0622 16:21:32.351747 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-mjwx7"
I0622 16:21:32.351766 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-jpn9j"
I0622 16:21:32.351783 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-pmfw6"
I0622 16:21:32.351795 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-qqmj6"
I0622 16:21:32.351826 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-jljmz"
I0622 16:21:32.351843 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-k5ngj"
I0622 16:21:32.351857 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-pkhrb"
I0622 16:21:32.351868 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-q8wgz"
I0622 16:21:32.351879 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-qq6rq"
I0622 16:21:32.351912 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-lcnqq"
I0622 16:21:32.351926 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nwj75"
I0622 16:21:32.351937 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nzjlk"
I0622 16:21:32.351948 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-pgd6r"
I0622 16:21:32.351959 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-k7lcx"
I0622 16:21:32.351969 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-kclls"
I0622 16:21:32.352046 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-khb2m"
I0622 16:21:32.352114 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-kmnbg"
I0622 16:21:32.352133 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-l4cwq"
I0622 16:21:32.352203 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nrtkb"
I0622 16:21:32.352222 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-pb4rj"
I0622 16:21:32.352273 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-pqtbj"
I0622 16:21:32.352291 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-qckb6"
I0622 16:21:32.352301 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-rhf9x"
I0622 16:21:32.352311 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-jvtg2"
I0622 16:21:32.352338 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-nnxpn"
I0622 16:21:32.352349 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-mrxnx"
I0622 16:21:32.352360 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-ptb47"
I0622 16:21:32.352370 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-r654r"
I0622 16:21:32.352381 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-hx44q"
I0622 16:21:32.352395 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-k4n8b"
I0622 16:21:32.352405 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-pskgv"
I0622 16:21:32.352415 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-kkn6w"
I0622 16:21:32.352425 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-lzbj5"
I0622 16:21:32.352435 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-mk5qp"
I0622 16:21:32.352445 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-mqktd"
I0622 16:21:32.352455 11 service.go:462] "Removing service port" portName="svc-latency-9242/latency-svc-p64tg"
I0622 16:21:32.352547 11 proxier.go:853] "Syncing iptables rules"
I0622 16:21:32.379495 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rwmsj" portCount=0
I0622 16:21:32.402119 11 service.go:322] "Service updated ports" service="svc-latency-9242/latency-svc-rwtzd" portCount=0
I0622 16:21:32.405003 11 proxier.go:1461] "Reloading service iptables data" numServices=55 numEndpoints=10 numFilterChains=4 numFilterRules=53 numNATChains=19 numNATRules=44
I0622 16:21:32.413008 11 proxier.go:820] "SyncProxyRules
complete\" elapsed=\"61.791645ms\"\nI0622 16:21:32.422098 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-rxlz4\" portCount=0\nI0622 16:21:32.446460 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-sbl5g\" portCount=0\nI0622 16:21:32.482573 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-scpzx\" portCount=0\nI0622 16:21:32.512100 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-shr59\" portCount=0\nI0622 16:21:32.529273 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-sj85d\" portCount=0\nI0622 16:21:32.545526 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-slggb\" portCount=0\nI0622 16:21:32.557730 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-swtbt\" portCount=0\nI0622 16:21:32.569191 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-t29jr\" portCount=0\nI0622 16:21:32.581106 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-t9h8t\" portCount=0\nI0622 16:21:32.604581 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-tdr9w\" portCount=0\nI0622 16:21:32.656796 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-tdrz2\" portCount=0\nI0622 16:21:32.670719 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-tqpq2\" portCount=0\nI0622 16:21:32.685133 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-tzf4s\" portCount=0\nI0622 16:21:32.709191 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-v5fq4\" portCount=0\nI0622 16:21:32.737692 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-v9hgz\" portCount=0\nI0622 16:21:32.758561 11 service.go:322] \"Service updated 
ports\" service=\"svc-latency-9242/latency-svc-vccdv\" portCount=0\nI0622 16:21:32.774809 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vs5tv\" portCount=0\nI0622 16:21:32.789130 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vsv4z\" portCount=0\nI0622 16:21:32.803437 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vvq8p\" portCount=0\nI0622 16:21:32.816531 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vxbkz\" portCount=0\nI0622 16:21:32.837240 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vxrqz\" portCount=0\nI0622 16:21:32.853161 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-vzzl9\" portCount=0\nI0622 16:21:32.866036 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-w756m\" portCount=0\nI0622 16:21:32.883614 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-w9m7q\" portCount=0\nI0622 16:21:32.904280 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-wmrl6\" portCount=0\nI0622 16:21:32.932642 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-wsdq7\" portCount=0\nI0622 16:21:32.943680 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-wvvll\" portCount=0\nI0622 16:21:32.957584 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-wzmzn\" portCount=0\nI0622 16:21:32.970963 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-x2lkw\" portCount=0\nI0622 16:21:32.982289 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-x6hs5\" portCount=0\nI0622 16:21:33.000754 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xb26c\" portCount=0\nI0622 16:21:33.029370 11 
service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xh7mb\" portCount=0\nI0622 16:21:33.052824 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xhpm7\" portCount=0\nI0622 16:21:33.064093 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xlglv\" portCount=0\nI0622 16:21:33.074176 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xpmpf\" portCount=0\nI0622 16:21:33.087465 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xsx5h\" portCount=0\nI0622 16:21:33.106676 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xtsvp\" portCount=0\nI0622 16:21:33.148409 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-xtvr7\" portCount=0\nI0622 16:21:33.174671 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-z2l2x\" portCount=0\nI0622 16:21:33.187006 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zhjxw\" portCount=0\nI0622 16:21:33.198858 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zln4x\" portCount=0\nI0622 16:21:33.209547 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zq2s9\" portCount=0\nI0622 16:21:33.219607 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zqzzn\" portCount=0\nI0622 16:21:33.229061 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zrp86\" portCount=0\nI0622 16:21:33.238340 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zthb5\" portCount=0\nI0622 16:21:33.285322 11 service.go:322] \"Service updated ports\" service=\"svc-latency-9242/latency-svc-zz88g\" portCount=0\nI0622 16:21:33.413630 11 service.go:462] \"Removing service port\" 
portName=\"svc-latency-9242/latency-svc-scpzx\"\nI0622 16:21:33.413666 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-t29jr\"\nI0622 16:21:33.413678 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-t9h8t\"\nI0622 16:21:33.413692 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vvq8p\"\nI0622 16:21:33.413703 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zz88g\"\nI0622 16:21:33.413717 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xsx5h\"\nI0622 16:21:33.413729 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zq2s9\"\nI0622 16:21:33.413740 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-rwmsj\"\nI0622 16:21:33.413752 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-rwtzd\"\nI0622 16:21:33.413764 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-sbl5g\"\nI0622 16:21:33.413779 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vxbkz\"\nI0622 16:21:33.413825 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vxrqz\"\nI0622 16:21:33.413858 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-w9m7q\"\nI0622 16:21:33.413891 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zrp86\"\nI0622 16:21:33.413937 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-swtbt\"\nI0622 16:21:33.413975 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-tzf4s\"\nI0622 16:21:33.414013 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-wmrl6\"\nI0622 16:21:33.414084 11 service.go:462] \"Removing service port\" 
portName=\"svc-latency-9242/latency-svc-wzmzn\"\nI0622 16:21:33.414142 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zln4x\"\nI0622 16:21:33.414179 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zthb5\"\nI0622 16:21:33.414210 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-tqpq2\"\nI0622 16:21:33.414241 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vccdv\"\nI0622 16:21:33.414270 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vzzl9\"\nI0622 16:21:33.414307 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xtsvp\"\nI0622 16:21:33.414341 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-shr59\"\nI0622 16:21:33.414374 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vsv4z\"\nI0622 16:21:33.414423 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xpmpf\"\nI0622 16:21:33.414457 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xtvr7\"\nI0622 16:21:33.414499 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-z2l2x\"\nI0622 16:21:33.414534 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zhjxw\"\nI0622 16:21:33.414569 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xb26c\"\nI0622 16:21:33.414636 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-sj85d\"\nI0622 16:21:33.414677 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-slggb\"\nI0622 16:21:33.414711 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-tdrz2\"\nI0622 16:21:33.414743 11 service.go:462] \"Removing service port\" 
portName=\"svc-latency-9242/latency-svc-v9hgz\"\nI0622 16:21:33.414776 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-w756m\"\nI0622 16:21:33.414810 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-x6hs5\"\nI0622 16:21:33.414843 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-wvvll\"\nI0622 16:21:33.414893 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xhpm7\"\nI0622 16:21:33.414927 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xlglv\"\nI0622 16:21:33.414958 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-xh7mb\"\nI0622 16:21:33.414991 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-zqzzn\"\nI0622 16:21:33.415022 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-rxlz4\"\nI0622 16:21:33.415072 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-tdr9w\"\nI0622 16:21:33.415108 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-v5fq4\"\nI0622 16:21:33.415141 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-vs5tv\"\nI0622 16:21:33.415173 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-wsdq7\"\nI0622 16:21:33.415205 11 service.go:462] \"Removing service port\" portName=\"svc-latency-9242/latency-svc-x2lkw\"\nI0622 16:21:33.415385 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:33.469480 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44\nI0622 16:21:33.475722 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.121651ms\"\nI0622 16:21:34.476549 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:34.520054 11 
proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=41\nI0622 16:21:34.525438 11 service.go:322] \"Service updated ports\" service=\"conntrack-8394/svc-udp\" portCount=0\nI0622 16:21:34.532613 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.20344ms\"\nI0622 16:21:35.532786 11 service.go:462] \"Removing service port\" portName=\"conntrack-8394/svc-udp:udp\"\nI0622 16:21:35.532903 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:35.598734 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:21:35.613385 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"80.620504ms\"\nI0622 16:21:41.078558 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:41.123188 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:21:41.129578 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.130161ms\"\nI0622 16:21:41.266129 11 service.go:322] \"Service updated ports\" service=\"services-992/hairpin-test\" portCount=0\nI0622 16:21:41.266194 11 service.go:462] \"Removing service port\" portName=\"services-992/hairpin-test\"\nI0622 16:21:41.266289 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:41.307882 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=36\nI0622 16:21:41.314213 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.017057ms\"\nI0622 16:21:42.314468 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:42.363960 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:21:42.374320 11 proxier.go:820] 
\"SyncProxyRules complete\" elapsed=\"59.975538ms\"\nI0622 16:21:44.479588 11 service.go:322] \"Service updated ports\" service=\"webhook-2096/e2e-test-webhook\" portCount=1\nI0622 16:21:44.479653 11 service.go:437] \"Adding new service port\" portName=\"webhook-2096/e2e-test-webhook\" servicePort=\"100.70.229.215:8443/TCP\"\nI0622 16:21:44.479738 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:44.518177 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=34\nI0622 16:21:44.523841 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.188927ms\"\nI0622 16:21:44.524168 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:44.562867 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:21:44.568860 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.977713ms\"\nI0622 16:21:45.945217 11 service.go:322] \"Service updated ports\" service=\"webhook-2096/e2e-test-webhook\" portCount=0\nI0622 16:21:45.945269 11 service.go:462] \"Removing service port\" portName=\"webhook-2096/e2e-test-webhook\"\nI0622 16:21:45.945363 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:45.988979 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=36\nI0622 16:21:45.994299 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.023705ms\"\nI0622 16:21:46.994615 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:47.034607 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:21:47.040253 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.769308ms\"\nI0622 16:21:47.436065 11 service.go:322] \"Service updated ports\" 
service=\"webhook-9014/e2e-test-webhook\" portCount=1\nI0622 16:21:48.040497 11 service.go:437] \"Adding new service port\" portName=\"webhook-9014/e2e-test-webhook\" servicePort=\"100.66.39.149:8443/TCP\"\nI0622 16:21:48.040647 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:48.078478 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:21:48.083900 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.43467ms\"\nI0622 16:21:59.421722 11 service.go:322] \"Service updated ports\" service=\"webhook-9014/e2e-test-webhook\" portCount=0\nI0622 16:21:59.421775 11 service.go:462] \"Removing service port\" portName=\"webhook-9014/e2e-test-webhook\"\nI0622 16:21:59.421870 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:59.461629 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=36\nI0622 16:21:59.467699 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.919662ms\"\nI0622 16:21:59.468155 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:21:59.508086 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:21:59.513910 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.162471ms\"\nI0622 16:22:13.032967 11 service.go:322] \"Service updated ports\" service=\"services-2698/test-service-vrpvh\" portCount=1\nI0622 16:22:13.033033 11 service.go:437] \"Adding new service port\" portName=\"services-2698/test-service-vrpvh:http\" servicePort=\"100.65.212.241:80/TCP\"\nI0622 16:22:13.033125 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:13.070897 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=34\nI0622 16:22:13.077433 
11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.407115ms\"\nI0622 16:22:13.171012 11 service.go:322] \"Service updated ports\" service=\"services-2698/test-service-vrpvh\" portCount=1\nI0622 16:22:13.171087 11 service.go:439] \"Updating existing service port\" portName=\"services-2698/test-service-vrpvh:http\" servicePort=\"100.65.212.241:80/TCP\"\nI0622 16:22:13.171445 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:13.212951 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34\nI0622 16:22:13.218217 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.135658ms\"\nI0622 16:22:13.414178 11 service.go:322] \"Service updated ports\" service=\"services-2698/test-service-vrpvh\" portCount=1\nI0622 16:22:13.511208 11 service.go:322] \"Service updated ports\" service=\"services-2698/test-service-vrpvh\" portCount=0\nI0622 16:22:14.218727 11 service.go:462] \"Removing service port\" portName=\"services-2698/test-service-vrpvh:http\"\nI0622 16:22:14.218848 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:14.268725 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:22:14.275436 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.727333ms\"\nI0622 16:22:21.667787 11 service.go:322] \"Service updated ports\" service=\"services-1523/svc-not-tolerate-unready\" portCount=0\nI0622 16:22:21.667957 11 service.go:462] \"Removing service port\" portName=\"services-1523/svc-not-tolerate-unready:http\"\nI0622 16:22:21.668057 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:21.705752 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:22:21.716749 11 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"48.791106ms\"\nI0622 16:22:21.716896 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:21.754725 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:22:21.760783 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.987497ms\"\nI0622 16:22:24.475963 11 service.go:322] \"Service updated ports\" service=\"crd-webhook-4716/e2e-test-crd-conversion-webhook\" portCount=1\nI0622 16:22:24.476017 11 service.go:437] \"Adding new service port\" portName=\"crd-webhook-4716/e2e-test-crd-conversion-webhook\" servicePort=\"100.71.175.129:9443/TCP\"\nI0622 16:22:24.476220 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:24.544553 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:22:24.552284 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"76.266692ms\"\nI0622 16:22:24.552751 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:24.604072 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:22:24.611956 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.591795ms\"\nI0622 16:22:28.861960 11 service.go:322] \"Service updated ports\" service=\"crd-webhook-4716/e2e-test-crd-conversion-webhook\" portCount=0\nI0622 16:22:28.862002 11 service.go:462] \"Removing service port\" portName=\"crd-webhook-4716/e2e-test-crd-conversion-webhook\"\nI0622 16:22:28.862096 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:28.905360 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 16:22:28.912880 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.875095ms\"\nI0622 16:22:28.913024 11 proxier.go:853] 
\"Syncing iptables rules\"\nI0622 16:22:28.962604 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:22:28.968511 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.581446ms\"\nI0622 16:22:37.656308 11 service.go:322] \"Service updated ports\" service=\"webhook-268/e2e-test-webhook\" portCount=1\nI0622 16:22:37.656371 11 service.go:437] \"Adding new service port\" portName=\"webhook-268/e2e-test-webhook\" servicePort=\"100.65.30.131:8443/TCP\"\nI0622 16:22:37.656466 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:37.709370 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:22:37.718553 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.181577ms\"\nI0622 16:22:37.718708 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:37.757968 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:22:37.763444 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.84459ms\"\nI0622 16:22:39.219460 11 service.go:322] \"Service updated ports\" service=\"webhook-268/e2e-test-webhook\" portCount=0\nI0622 16:22:39.219515 11 service.go:462] \"Removing service port\" portName=\"webhook-268/e2e-test-webhook\"\nI0622 16:22:39.219617 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:39.272890 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 16:22:39.279974 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.456363ms\"\nI0622 16:22:40.280442 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:40.320111 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 
numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:22:40.325306 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.002321ms\"\nI0622 16:22:42.254244 11 service.go:322] \"Service updated ports\" service=\"webhook-5545/e2e-test-webhook\" portCount=1\nI0622 16:22:42.254302 11 service.go:437] \"Adding new service port\" portName=\"webhook-5545/e2e-test-webhook\" servicePort=\"100.64.75.62:8443/TCP\"\nI0622 16:22:42.254394 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:42.311084 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:22:42.319546 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.246882ms\"\nI0622 16:22:42.319692 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:42.408539 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:22:42.419226 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"99.633059ms\"\nI0622 16:22:44.001340 11 service.go:322] \"Service updated ports\" service=\"webhook-5545/e2e-test-webhook\" portCount=0\nI0622 16:22:44.001397 11 service.go:462] \"Removing service port\" portName=\"webhook-5545/e2e-test-webhook\"\nI0622 16:22:44.001491 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:44.039562 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 16:22:44.045181 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.779651ms\"\nI0622 16:22:45.045523 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:22:45.083042 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:22:45.088078 11 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"42.694848ms\"\nI0622 16:23:00.131813 11 service.go:322] \"Service updated ports\" service=\"kubectl-2187/agnhost-primary\" portCount=1\nI0622 16:23:00.131876 11 service.go:437] \"Adding new service port\" portName=\"kubectl-2187/agnhost-primary\" servicePort=\"100.69.244.222:6379/TCP\"\nI0622 16:23:00.131971 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:00.171102 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:23:00.176898 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.031911ms\"\nI0622 16:23:00.177030 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:00.213805 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:23:00.219110 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.17221ms\"\nI0622 16:23:05.814926 11 service.go:322] \"Service updated ports\" service=\"webhook-2468/e2e-test-webhook\" portCount=1\nI0622 16:23:05.814981 11 service.go:437] \"Adding new service port\" portName=\"webhook-2468/e2e-test-webhook\" servicePort=\"100.66.243.70:8443/TCP\"\nI0622 16:23:05.815075 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:05.877536 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34\nI0622 16:23:05.883075 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.094627ms\"\nI0622 16:23:05.883224 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:05.923042 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=39\nI0622 16:23:05.929219 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.103329ms\"\nI0622 16:23:06.372949 11 service.go:322] \"Service updated ports\" 
service=\"kubectl-2187/agnhost-primary\" portCount=0\nI0622 16:23:06.929445 11 service.go:462] \"Removing service port\" portName=\"kubectl-2187/agnhost-primary\"\nI0622 16:23:06.929587 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:06.969052 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:23:06.974593 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.188253ms\"\nI0622 16:23:07.195532 11 service.go:322] \"Service updated ports\" service=\"webhook-2468/e2e-test-webhook\" portCount=0\nI0622 16:23:07.975197 11 service.go:462] \"Removing service port\" portName=\"webhook-2468/e2e-test-webhook\"\nI0622 16:23:07.975432 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:08.021887 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0622 16:23:08.029435 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.253955ms\"\nI0622 16:23:14.025120 11 service.go:322] \"Service updated ports\" service=\"conntrack-7481/boom-server\" portCount=1\nI0622 16:23:14.025180 11 service.go:437] \"Adding new service port\" portName=\"conntrack-7481/boom-server\" servicePort=\"100.69.119.160:9000/TCP\"\nI0622 16:23:14.025264 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:14.072232 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:23:14.080059 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.867512ms\"\nI0622 16:23:14.080204 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:23:14.120424 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:23:14.129672 11 proxier.go:820] \"SyncProxyRules complete\" 
elapsed="49.56758ms"
I0622 16:23:44.818718 11 service.go:322] "Service updated ports" service="deployment-8835/test-rolling-update-with-lb" portCount=1
I0622 16:23:44.818782 11 service.go:437] "Adding new service port" portName="deployment-8835/test-rolling-update-with-lb" servicePort="100.69.11.57:80/TCP"
I0622 16:23:44.818873 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:44.856317 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:23:44.862428 11 service_health.go:124] "Opening healthcheck" service="deployment-8835/test-rolling-update-with-lb" port=31061
I0622 16:23:44.862546 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.765936ms"
I0622 16:23:44.862808 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:44.903963 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0622 16:23:44.910049 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.439071ms"
I0622 16:23:51.643713 11 service.go:322] "Service updated ports" service="services-3052/nodeport-service" portCount=1
I0622 16:23:51.643772 11 service.go:437] "Adding new service port" portName="services-3052/nodeport-service" servicePort="100.65.73.178:80/TCP"
I0622 16:23:51.643869 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:51.689555 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=56
I0622 16:23:51.700090 11 proxier.go:820] "SyncProxyRules complete" elapsed="56.316698ms"
I0622 16:23:51.700270 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:51.748547 11 service.go:322] "Service updated ports" service="services-3052/externalsvc" portCount=1
I0622 16:23:51.756646 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=56
I0622 16:23:51.763614 11 proxier.go:820] "SyncProxyRules complete" elapsed="63.476787ms"
I0622 16:23:52.763815 11 service.go:437] "Adding new service port" portName="services-3052/externalsvc" servicePort="100.69.32.29:80/TCP"
I0622 16:23:52.763945 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:52.806073 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=23 numNATRules=56
I0622 16:23:52.815588 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.815454ms"
I0622 16:23:56.550958 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:56.608270 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=61
I0622 16:23:56.614664 11 proxier.go:820] "SyncProxyRules complete" elapsed="63.835517ms"
I0622 16:23:58.302284 11 proxier.go:853] "Syncing iptables rules"
I0622 16:23:58.340857 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=64
I0622 16:23:58.346798 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.791242ms"
I0622 16:24:01.061038 11 service.go:322] "Service updated ports" service="services-3052/nodeport-service" portCount=0
I0622 16:24:01.061090 11 service.go:462] "Removing service port" portName="services-3052/nodeport-service"
I0622 16:24:01.061196 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:01.100633 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=64
I0622 16:24:01.107588 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.495239ms"
I0622 16:24:04.055005 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:04.115637 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=59
I0622 16:24:04.124621 11 proxier.go:820] "SyncProxyRules complete" elapsed="69.770002ms"
I0622 16:24:04.525947 11 service.go:322] "Service updated ports" service="kubectl-1003/agnhost-replica" portCount=1
I0622 16:24:04.526027 11 service.go:437] "Adding new service port" portName="kubectl-1003/agnhost-replica" servicePort="100.66.37.100:6379/TCP"
I0622 16:24:04.526146 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:04.563488 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=56
I0622 16:24:04.569269 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.252138ms"
I0622 16:24:04.897057 11 service.go:322] "Service updated ports" service="kubectl-1003/agnhost-primary" portCount=1
I0622 16:24:05.273297 11 service.go:322] "Service updated ports" service="kubectl-1003/frontend" portCount=1
I0622 16:24:05.273354 11 service.go:437] "Adding new service port" portName="kubectl-1003/agnhost-primary" servicePort="100.71.182.136:6379/TCP"
I0622 16:24:05.273369 11 service.go:437] "Adding new service port" portName="kubectl-1003/frontend" servicePort="100.64.27.192:80/TCP"
I0622 16:24:05.273475 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:05.331615 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=8 numNATChains=23 numNATRules=56
I0622 16:24:05.339763 11 proxier.go:820] "SyncProxyRules complete" elapsed="66.409878ms"
I0622 16:24:06.340530 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:06.380962 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=8 numNATChains=23 numNATRules=56
I0622 16:24:06.388813 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.374161ms"
I0622 16:24:07.146042 11 service.go:322] "Service updated ports" service="services-3052/externalsvc" portCount=0
I0622 16:24:07.146095 11 service.go:462] "Removing service port" portName="services-3052/externalsvc"
I0622 16:24:07.146248 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:07.191446 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=61
I0622 16:24:07.207603 11 proxier.go:820] "SyncProxyRules complete" elapsed="61.500644ms"
I0622 16:24:08.208747 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:08.248334 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=27 numNATRules=66
I0622 16:24:08.254621 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.008194ms"
I0622 16:24:09.254936 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:09.293648 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=69
I0622 16:24:09.299814 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.035086ms"
I0622 16:24:10.300611 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:10.341611 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=72
I0622 16:24:10.347690 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.239214ms"
I0622 16:24:11.732785 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:11.771515 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=30 numNATRules=75
I0622 16:24:11.778131 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.502915ms"
I0622 16:24:12.709624 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:12.767989 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=17 numFilterChains=4 numFilterRules=4 numNATChains=32 numNATRules=80
I0622 16:24:12.777659 11 proxier.go:820] "SyncProxyRules complete" elapsed="68.160333ms"
I0622 16:24:16.743107 11 service.go:322] "Service updated ports" service="kubectl-1003/agnhost-replica" portCount=0
I0622 16:24:16.743193 11 service.go:462] "Removing service port" portName="kubectl-1003/agnhost-replica"
I0622 16:24:16.743448 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:16.785103 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=32 numNATRules=75
I0622 16:24:16.791364 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.193647ms"
I0622 16:24:16.791702 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:16.832565 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=72
I0622 16:24:16.838804 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.389788ms"
I0622 16:24:17.010809 11 service.go:322] "Service updated ports" service="kubectl-1003/agnhost-primary" portCount=0
I0622 16:24:17.264204 11 service.go:322] "Service updated ports" service="kubectl-1003/frontend" portCount=0
I0622 16:24:17.839069 11 service.go:462] "Removing service port" portName="kubectl-1003/agnhost-primary"
I0622 16:24:17.839106 11 service.go:462] "Removing service port" portName="kubectl-1003/frontend"
I0622 16:24:17.839293 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:17.879032 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=62
I0622 16:24:17.891787 11 proxier.go:820] "SyncProxyRules complete" elapsed="52.746152ms"
I0622 16:24:28.774231 11 service.go:322] "Service updated ports" service="deployment-8835/test-rolling-update-with-lb" portCount=1
I0622 16:24:28.774304 11 service.go:439] "Updating existing service port" portName="deployment-8835/test-rolling-update-with-lb" servicePort="100.69.11.57:80/TCP"
I0622 16:24:28.774406 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:28.813901 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=57
I0622 16:24:28.820267 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.962733ms"
I0622 16:24:30.859763 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:30.908769 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=61
I0622 16:24:30.917807 11 proxier.go:820] "SyncProxyRules complete" elapsed="58.197964ms"
I0622 16:24:30.988961 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:31.033468 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 16:24:31.041505 11 proxier.go:820] "SyncProxyRules complete" elapsed="52.682423ms"
I0622 16:24:32.042627 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:32.081870 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0622 16:24:32.087440 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.995648ms"
I0622 16:24:35.812099 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:35.860440 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=54
I0622 16:24:35.867827 11 proxier.go:820] "SyncProxyRules complete" elapsed="55.831519ms"
I0622 16:24:35.972317 11 service.go:322] "Service updated ports" service="endpointslicemirroring-598/example-custom-endpoints" portCount=1
I0622 16:24:35.972376 11 service.go:437] "Adding new service port" portName="endpointslicemirroring-598/example-custom-endpoints:example" servicePort="100.67.121.162:80/TCP"
I0622 16:24:35.972479 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:36.020661 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=52
I0622 16:24:36.026196 11 proxier.go:820] "SyncProxyRules complete" elapsed="53.83027ms"
I0622 16:24:36.054273 11 service.go:322] "Service updated ports" service="conntrack-7481/boom-server" portCount=0
I0622 16:24:37.026684 11 service.go:462] "Removing service port" portName="conntrack-7481/boom-server"
I0622 16:24:37.026839 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:37.094729 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=52
I0622 16:24:37.101539 11 proxier.go:820] "SyncProxyRules complete" elapsed="74.89238ms"
I0622 16:24:38.126930 11 service.go:322] "Service updated ports" service="webhook-9595/e2e-test-webhook" portCount=1
I0622 16:24:38.126991 11 service.go:437] "Adding new service port" portName="webhook-9595/e2e-test-webhook" servicePort="100.71.151.94:8443/TCP"
I0622 16:24:38.127086 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:38.194971 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=52
I0622 16:24:38.202301 11 proxier.go:820] "SyncProxyRules complete" elapsed="75.310455ms"
I0622 16:24:39.203380 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:39.242030 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=57
I0622 16:24:39.247898 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.821307ms"
I0622 16:24:41.542349 11 service.go:322] "Service updated ports" service="endpointslicemirroring-598/example-custom-endpoints" portCount=0
I0622 16:24:41.542402 11 service.go:462] "Removing service port" portName="endpointslicemirroring-598/example-custom-endpoints:example"
I0622 16:24:41.542512 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:41.594825 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=57
I0622 16:24:41.606176 11 proxier.go:820] "SyncProxyRules complete" elapsed="63.766002ms"
I0622 16:24:42.225884 11 service.go:322] "Service updated ports" service="webhook-9595/e2e-test-webhook" portCount=0
I0622 16:24:42.226392 11 service.go:462] "Removing service port" portName="webhook-9595/e2e-test-webhook"
I0622 16:24:42.226536 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:42.268360 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=54
I0622 16:24:42.274313 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.943945ms"
I0622 16:24:42.934158 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:42.987274 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55
I0622 16:24:42.993837 11 proxier.go:820] "SyncProxyRules complete" elapsed="59.827048ms"
I0622 16:24:43.995024 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:44.084717 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:24:44.092684 11 proxier.go:820] "SyncProxyRules complete" elapsed="97.832793ms"
I0622 16:24:44.830812 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:44.868277 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55
I0622 16:24:44.874042 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.375221ms"
I0622 16:24:45.874466 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:45.932489 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:24:45.940083 11 proxier.go:820] "SyncProxyRules complete" elapsed="65.766258ms"
I0622 16:24:48.614026 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:48.665843 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=56
I0622 16:24:48.674791 11 proxier.go:820] "SyncProxyRules complete" elapsed="60.847239ms"
I0622 16:24:48.674987 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:48.733599 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:24:48.740892 11 proxier.go:820] "SyncProxyRules complete" elapsed="66.054146ms"
I0622 16:24:49.741324 11 proxier.go:853] "Syncing iptables rules"
I0622 16:24:49.779235 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52
I0622 16:24:49.785608 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.419595ms"
I0622 16:25:00.510148 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:00.547773 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55
I0622 16:25:00.553853 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.819275ms"
I0622 16:25:00.607614 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:00.653917 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:25:00.659526 11 proxier.go:820] "SyncProxyRules complete" elapsed="52.058561ms"
I0622 16:25:01.659942 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:01.699331 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52
I0622 16:25:01.704841 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.071094ms"
I0622 16:25:03.004727 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:03.045312 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55
I0622 16:25:03.052006 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.435164ms"
I0622 16:25:04.052456 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:04.096366 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:25:04.102223 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.932843ms"
I0622 16:25:04.758587 11 service.go:322] "Service updated ports" service="services-656/nodeport-collision-1" portCount=1
I0622 16:25:04.758640 11 service.go:437] "Adding new service port" portName="services-656/nodeport-collision-1" servicePort="100.68.212.255:80/TCP"
I0622 16:25:04.758747 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:04.839110 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=52
I0622 16:25:04.847481 11 proxier.go:820] "SyncProxyRules complete" elapsed="88.840755ms"
I0622 16:25:04.880018 11 service.go:322] "Service updated ports" service="services-656/nodeport-collision-1" portCount=0
I0622 16:25:04.958587 11 service.go:322] "Service updated ports" service="services-656/nodeport-collision-2" portCount=1
I0622 16:25:05.847695 11 service.go:462] "Removing service port" portName="services-656/nodeport-collision-1"
I0622 16:25:05.847847 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:05.908599 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52
I0622 16:25:05.917331 11 proxier.go:820] "SyncProxyRules complete" elapsed="69.666242ms"
I0622 16:25:06.514485 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:06.561739 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=55
I0622 16:25:06.569171 11 proxier.go:820] "SyncProxyRules complete" elapsed="54.814845ms"
I0622 16:25:07.569655 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:07.625981 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:25:07.637901 11 proxier.go:820] "SyncProxyRules complete" elapsed="68.543085ms"
I0622 16:25:18.815485 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:18.857267 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=56
I0622 16:25:18.863094 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.773687ms"
I0622 16:25:18.905447 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:18.958211 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:25:18.964316 11 proxier.go:820] "SyncProxyRules complete" elapsed="59.02295ms"
I0622 16:25:19.964613 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:20.001777 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52
I0622 16:25:20.007302 11 proxier.go:820] "SyncProxyRules complete" elapsed="42.813505ms"
I0622 16:25:51.918300 11 service.go:322] "Service updated ports" service="webhook-4728/e2e-test-webhook" portCount=1
I0622 16:25:51.918356 11 service.go:437] "Adding new service port" portName="webhook-4728/e2e-test-webhook" servicePort="100.65.196.214:8443/TCP"
I0622 16:25:51.918453 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:51.980108 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=52
I0622 16:25:51.994857 11 proxier.go:820] "SyncProxyRules complete" elapsed="76.503599ms"
I0622 16:25:51.995021 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:52.052927 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=57
I0622 16:25:52.065394 11 proxier.go:820] "SyncProxyRules complete" elapsed="70.488523ms"
I0622 16:25:55.714724 11 service.go:322] "Service updated ports" service="webhook-4728/e2e-test-webhook" portCount=0
I0622 16:25:55.714771 11 service.go:462] "Removing service port" portName="webhook-4728/e2e-test-webhook"
I0622 16:25:55.714879 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:55.772157 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=54
I0622 16:25:55.778176 11 proxier.go:820] "SyncProxyRules complete" elapsed="63.396132ms"
I0622 16:25:55.778329 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:55.827182 11 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=52
I0622 16:25:55.846341 11 proxier.go:820] "SyncProxyRules complete" elapsed="68.113853ms"
I0622 16:25:56.038165 11 service.go:322] "Service updated ports" service="conntrack-6783/svc-udp" portCount=1
I0622 16:25:56.846597 11 service.go:437] "Adding new service port" portName="conntrack-6783/svc-udp:udp" servicePort="100.69.52.85:80/UDP"
I0622 16:25:56.846732 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:56.898747 11 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=52
I0622 16:25:56.907397 11 proxier.go:820] "SyncProxyRules complete" elapsed="60.831041ms"
I0622 16:25:59.178034 11 service.go:322] "Service updated ports" service="services-9488/svc-tolerate-unready" portCount=1
I0622 16:25:59.178090 11 service.go:437] "Adding new service port" portName="services-9488/svc-tolerate-unready:http" servicePort="100.65.24.232:80/TCP"
I0622 16:25:59.178194 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:59.222379 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=21 numNATRules=52
I0622 16:25:59.228387 11 proxier.go:820] "SyncProxyRules complete" elapsed="50.297367ms"
I0622 16:25:59.228737 11 proxier.go:853] "Syncing iptables rules"
I0622 16:25:59.271178 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=21 numNATRules=52
I0622 16:25:59.277733 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.297365ms"
I0622 16:26:02.017216 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:02.069542 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60
I0622 16:26:02.076215 11 proxier.go:820] "SyncProxyRules complete" elapsed="59.126445ms"
I0622 16:26:08.199415 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:08.247733 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60
I0622 16:26:08.254453 11 proxier.go:820] "SyncProxyRules complete" elapsed="55.138045ms"
I0622 16:26:08.254500 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:08.289120 11 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 16:26:08.291313 11 proxier.go:820] "SyncProxyRules complete" elapsed="36.813727ms"
I0622 16:26:08.922765 11 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-6783/svc-udp:udp" clusterIP="100.69.52.85"
I0622 16:26:08.922793 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:08.963284 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=65
I0622 16:26:08.973286 11 proxier.go:820] "SyncProxyRules complete" elapsed="50.697917ms"
I0622 16:26:09.362245 11 service.go:322] "Service updated ports" service="services-8087/nodeport-update-service" portCount=1
I0622 16:26:09.362314 11 service.go:437] "Adding new service port" portName="services-8087/nodeport-update-service" servicePort="100.65.188.19:80/TCP"
I0622 16:26:09.362418 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:09.401321 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=65
I0622 16:26:09.409308 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.996355ms"
I0622 16:26:09.454211 11 service.go:322] "Service updated ports" service="services-8087/nodeport-update-service" portCount=1
I0622 16:26:10.409510 11 service.go:437] "Adding new service port" portName="services-8087/nodeport-update-service:tcp-port" servicePort="100.65.188.19:80/TCP"
I0622 16:26:10.409548 11 service.go:462] "Removing service port" portName="services-8087/nodeport-update-service"
I0622 16:26:10.409685 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:10.450624 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=65
I0622 16:26:10.456973 11 proxier.go:820] "SyncProxyRules complete" elapsed="47.501708ms"
I0622 16:26:11.747786 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:11.839621 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=65
I0622 16:26:11.852734 11 proxier.go:820] "SyncProxyRules complete" elapsed="105.08118ms"
I0622 16:26:20.955781 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:20.997063 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=73
I0622 16:26:21.004473 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.802329ms"
I0622 16:26:23.343014 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:23.400420 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=76
I0622 16:26:23.411501 11 proxier.go:820] "SyncProxyRules complete" elapsed="68.623725ms"
I0622 16:26:25.454977 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:25.496492 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=74
I0622 16:26:25.504888 11 proxier.go:820] "SyncProxyRules complete" elapsed="50.042314ms"
I0622 16:26:25.505079 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:25.567887 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=73
I0622 16:26:25.576416 11 proxier.go:820] "SyncProxyRules complete" elapsed="71.484789ms"
I0622 16:26:26.536392 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:26.576043 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=29 numNATRules=63
I0622 16:26:26.582736 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.462955ms"
I0622 16:26:27.583585 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:27.625484 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=24 numNATRules=58
I0622 16:26:27.632330 11 proxier.go:820] "SyncProxyRules complete" elapsed="48.847621ms"
I0622 16:26:28.633233 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:28.675532 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=61
I0622 16:26:28.684539 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.522982ms"
I0622 16:26:29.685830 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:29.740868 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=59
I0622 16:26:29.754883 11 proxier.go:820] "SyncProxyRules complete" elapsed="69.218211ms"
I0622 16:26:43.210591 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:43.249922 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=24 numNATRules=58
I0622 16:26:43.256317 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.817236ms"
I0622 16:26:44.522949 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:44.561618 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=8 numNATChains=24 numNATRules=55
I0622 16:26:44.571971 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.12957ms"
I0622 16:26:44.582923 11 service.go:322] "Service updated ports" service="conntrack-6783/svc-udp" portCount=0
I0622 16:26:44.583002 11 service.go:462] "Removing service port" portName="conntrack-6783/svc-udp:udp"
I0622 16:26:44.583404 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:44.621978 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=53
I0622 16:26:44.632503 11 proxier.go:820] "SyncProxyRules complete" elapsed="49.49948ms"
I0622 16:26:45.632891 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:45.671216 11 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=53
I0622 16:26:45.677018 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.361445ms"
I0622 16:26:47.934053 11 service.go:322] "Service updated ports" service="services-8087/nodeport-update-service" portCount=2
I0622 16:26:47.934183 11 service.go:437] "Adding new service port" portName="services-8087/nodeport-update-service:udp-port" servicePort="100.65.188.19:80/UDP"
I0622 16:26:47.934201 11 service.go:439] "Updating existing service port" portName="services-8087/nodeport-update-service:tcp-port" servicePort="100.65.188.19:80/TCP"
I0622 16:26:47.934305 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:47.973448 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=9 numNATChains=22 numNATRules=53
I0622 16:26:47.979446 11 proxier.go:820] "SyncProxyRules complete" elapsed="45.276067ms"
I0622 16:26:47.979844 11 proxier.go:837] "Stale service" protocol="udp" servicePortName="services-8087/nodeport-update-service:udp-port" clusterIP="100.65.188.19"
I0622 16:26:47.979934 11 proxier.go:847] "Stale service" protocol="udp" servicePortName="services-8087/nodeport-update-service:udp-port" nodePort=32295
I0622 16:26:47.979944 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:48.017691 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=64
I0622 16:26:48.030961 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.472641ms"
I0622 16:26:49.520751 11 service.go:322] "Service updated ports" service="services-1925/service-proxy-toggled" portCount=1
I0622 16:26:49.520810 11 service.go:437] "Adding new service port" portName="services-1925/service-proxy-toggled" servicePort="100.71.87.162:80/TCP"
I0622 16:26:49.521020 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:49.568016 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64
I0622 16:26:49.574081 11 proxier.go:820] "SyncProxyRules complete" elapsed="53.275331ms"
I0622 16:26:50.265157 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:50.318868 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64
I0622 16:26:50.326865 11 proxier.go:820] "SyncProxyRules complete" elapsed="61.81279ms"
I0622 16:26:50.326915 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:50.368234 11 proxier.go:1461] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0622 16:26:50.370895 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.977409ms"
I0622 16:26:50.574257 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:50.613702 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64
I0622 16:26:50.620754 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.595803ms"
I0622 16:26:51.156741 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:51.200616 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=28 numNATRules=69
I0622 16:26:51.207506 11 proxier.go:820] "SyncProxyRules complete" elapsed="50.877594ms"
I0622 16:26:52.208778 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:52.248632 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=7 numNATChains=29 numNATRules=72
I0622 16:26:52.255133 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.640851ms"
I0622 16:26:54.882982 11 service.go:322] "Service updated ports" service="webhook-9825/e2e-test-webhook" portCount=1
I0622 16:26:54.883040 11 service.go:437] "Adding new service port" portName="webhook-9825/e2e-test-webhook" servicePort="100.70.230.1:8443/TCP"
I0622 16:26:54.883175 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:54.934448 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=8 numNATChains=29 numNATRules=72
I0622 16:26:54.943770 11 proxier.go:820] "SyncProxyRules complete" elapsed="60.732816ms"
I0622 16:26:54.943924 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:54.996327 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=31 numNATRules=77
I0622 16:26:55.004604 11 proxier.go:820] "SyncProxyRules complete" elapsed="60.791465ms"
I0622 16:26:56.005492 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:56.060829 11 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=7 numNATChains=32 numNATRules=80
I0622 16:26:56.070841 11 proxier.go:820] "SyncProxyRules complete" elapsed="65.523902ms"
I0622 16:26:56.639644 11 service.go:322] "Service updated ports" service="webhook-9825/e2e-test-webhook" portCount=0
I0622 16:26:57.073510 11 service.go:462] "Removing service port" portName="webhook-9825/e2e-test-webhook"
I0622 16:26:57.073676 11 proxier.go:853] "Syncing iptables rules"
I0622 16:26:57.136494 11 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=7 numNATChains=32 numNATRules=77
I0622 16:26:57.144899 11 proxier.go:820] "SyncProxyRules complete" elapsed="71.424723ms"
I0622 16:27:00.267228 11 service.go:322] "Service updated ports" service="deployment-8835/test-rolling-update-with-lb" portCount=0
I0622 16:27:00.267300 11 service.go:462] "Removing service port" portName="deployment-8835/test-rolling-update-with-lb"
I0622 16:27:00.267407 11 proxier.go:853] "Syncing iptables rules"
I0622 16:27:00.307239 11 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=3 numNATChains=30 numNATRules=75
I0622 16:27:00.313343 11 service_health.go:107] "Closing healthcheck" service="deployment-8835/test-rolling-update-with-lb" port=31061
I0622 16:27:00.313647 11 proxier.go:820] "SyncProxyRules complete" elapsed="46.347809ms"
E0622 16:27:00.313739 11 service_health.go:187] "Healthcheck closed" err="accept
tcp [::]:31061: use of closed network connection\" service=\"deployment-8835/test-rolling-update-with-lb\"\nI0622 16:27:20.670523 11 service.go:322] \"Service updated ports\" service=\"services-1925/service-proxy-toggled\" portCount=0\nI0622 16:27:20.670581 11 service.go:462] \"Removing service port\" portName=\"services-1925/service-proxy-toggled\"\nI0622 16:27:20.670686 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:20.725074 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=30 numNATRules=68\nI0622 16:27:20.744324 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"73.736479ms\"\nI0622 16:27:20.744493 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:20.839915 11 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=64\nI0622 16:27:20.858882 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"114.509543ms\"\nI0622 16:27:22.362970 11 service.go:322] \"Service updated ports\" service=\"services-9488/svc-tolerate-unready\" portCount=0\nI0622 16:27:22.363032 11 service.go:462] \"Removing service port\" portName=\"services-9488/svc-tolerate-unready:http\"\nI0622 16:27:22.363141 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:22.400586 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=59\nI0622 16:27:22.406802 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.766249ms\"\nI0622 16:27:23.056949 11 service.go:322] \"Service updated ports\" service=\"services-8087/nodeport-update-service\" portCount=0\nI0622 16:27:23.057009 11 service.go:462] \"Removing service port\" portName=\"services-8087/nodeport-update-service:tcp-port\"\nI0622 16:27:23.057021 11 service.go:462] \"Removing service port\" 
portName=\"services-8087/nodeport-update-service:udp-port\"\nI0622 16:27:23.057136 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:23.100295 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=42\nI0622 16:27:23.112340 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.32791ms\"\nI0622 16:27:24.112568 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:24.151435 11 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:27:24.156829 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.399474ms\"\nI0622 16:27:25.111946 11 service.go:322] \"Service updated ports\" service=\"webhook-3523/e2e-test-webhook\" portCount=1\nI0622 16:27:25.112003 11 service.go:437] \"Adding new service port\" portName=\"webhook-3523/e2e-test-webhook\" servicePort=\"100.70.31.14:8443/TCP\"\nI0622 16:27:25.112101 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:25.152617 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:27:25.157982 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.984516ms\"\nI0622 16:27:25.524083 11 service.go:322] \"Service updated ports\" service=\"services-1925/service-proxy-toggled\" portCount=1\nI0622 16:27:26.158228 11 service.go:437] \"Adding new service port\" portName=\"services-1925/service-proxy-toggled\" servicePort=\"100.71.87.162:80/TCP\"\nI0622 16:27:26.158454 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:26.195181 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 16:27:26.200659 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.470993ms\"\nI0622 16:27:26.548419 11 
service.go:322] \"Service updated ports\" service=\"webhook-3523/e2e-test-webhook\" portCount=0\nI0622 16:27:27.200873 11 service.go:462] \"Removing service port\" portName=\"webhook-3523/e2e-test-webhook\"\nI0622 16:27:27.201036 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:27.244472 11 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=47\nI0622 16:27:27.249791 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.958653ms\"\nI0622 16:27:42.666782 11 service.go:322] \"Service updated ports\" service=\"sctp-313/sctp-endpoint-test\" portCount=1\nI0622 16:27:42.666835 11 service.go:437] \"Adding new service port\" portName=\"sctp-313/sctp-endpoint-test\" servicePort=\"100.65.149.156:5060/SCTP\"\nI0622 16:27:42.666931 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:42.707931 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 16:27:42.717896 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.064639ms\"\nI0622 16:27:42.718083 11 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:27:42.775330 11 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45\nI0622 16:27:42.789381 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.430168ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-west4-a-r4pg ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-nodes-us-west4-a-z5t6 ====\n2022/06/22 16:11:37 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): 
/usr/local/bin/kube-proxy,--cluster-cidr=100.96.0.0/11,--conntrack-max-per-core=131072,--hostname-override=nodes-us-west4-a-z5t6,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-e2e-kops-gce-stable.k8s.local,--oom-score-adj=-998,--v=2\n2022/06/22 16:11:37 Now listening for interrupts\nI0622 16:11:37.514384 10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0622 16:11:37.514562 10 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0622 16:11:37.514637 10 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0622 16:11:37.514673 10 flags.go:64] FLAG: --bind-address-hard-fail=\"false\"\nI0622 16:11:37.514694 10 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0622 16:11:37.514715 10 flags.go:64] FLAG: --cleanup=\"false\"\nI0622 16:11:37.514734 10 flags.go:64] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0622 16:11:37.514754 10 flags.go:64] FLAG: --config=\"\"\nI0622 16:11:37.514778 10 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0622 16:11:37.514799 10 flags.go:64] FLAG: --conntrack-max-per-core=\"131072\"\nI0622 16:11:37.514820 10 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0622 16:11:37.514840 10 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0622 16:11:37.514860 10 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0622 16:11:37.514879 10 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0622 16:11:37.514920 10 flags.go:64] FLAG: --feature-gates=\"\"\nI0622 16:11:37.514945 10 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0622 16:11:37.514966 10 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0622 16:11:37.515019 10 flags.go:64] FLAG: --help=\"false\"\nI0622 16:11:37.515043 10 flags.go:64] FLAG: --hostname-override=\"nodes-us-west4-a-z5t6\"\nI0622 16:11:37.515064 10 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0622 16:11:37.515083 10 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0622 16:11:37.515107 10 flags.go:64] FLAG: 
--iptables-sync-period=\"30s\"\nI0622 16:11:37.515128 10 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0622 16:11:37.515489 10 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0622 16:11:37.515946 10 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0622 16:11:37.516424 10 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0622 16:11:37.516433 10 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0622 16:11:37.516439 10 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0622 16:11:37.516444 10 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0622 16:11:37.516448 10 flags.go:64] FLAG: --ipvs-udp-timeout=\"0s\"\nI0622 16:11:37.516453 10 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0622 16:11:37.516458 10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0622 16:11:37.516464 10 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0622 16:11:37.516482 10 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0622 16:11:37.516488 10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0622 16:11:37.516499 10 flags.go:64] FLAG: --log-dir=\"\"\nI0622 16:11:37.516506 10 flags.go:64] FLAG: --log-file=\"\"\nI0622 16:11:37.516512 10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0622 16:11:37.516518 10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0622 16:11:37.516524 10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0622 16:11:37.516530 10 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0622 16:11:37.516536 10 flags.go:64] FLAG: --masquerade-all=\"false\"\nI0622 16:11:37.516541 10 flags.go:64] FLAG: --master=\"https://api.internal.e2e-e2e-kops-gce-stable.k8s.local\"\nI0622 16:11:37.516547 10 flags.go:64] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0622 16:11:37.516552 10 flags.go:64] FLAG: --metrics-port=\"10249\"\nI0622 16:11:37.516559 10 flags.go:64] FLAG: --nodeport-addresses=\"[]\"\nI0622 16:11:37.516567 10 flags.go:64] FLAG: --one-output=\"false\"\nI0622 16:11:37.516573 10 flags.go:64] FLAG: 
--oom-score-adj=\"-998\"\nI0622 16:11:37.516579 10 flags.go:64] FLAG: --pod-bridge-interface=\"\"\nI0622 16:11:37.516584 10 flags.go:64] FLAG: --pod-interface-name-prefix=\"\"\nI0622 16:11:37.516589 10 flags.go:64] FLAG: --profiling=\"false\"\nI0622 16:11:37.516594 10 flags.go:64] FLAG: --proxy-mode=\"\"\nI0622 16:11:37.516609 10 flags.go:64] FLAG: --proxy-port-range=\"\"\nI0622 16:11:37.516616 10 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0622 16:11:37.516621 10 flags.go:64] FLAG: --skip-headers=\"false\"\nI0622 16:11:37.516625 10 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0622 16:11:37.516630 10 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0622 16:11:37.516635 10 flags.go:64] FLAG: --udp-timeout=\"250ms\"\nI0622 16:11:37.516640 10 flags.go:64] FLAG: --v=\"2\"\nI0622 16:11:37.516645 10 flags.go:64] FLAG: --version=\"false\"\nI0622 16:11:37.516652 10 flags.go:64] FLAG: --vmodule=\"\"\nI0622 16:11:37.516658 10 flags.go:64] FLAG: --write-config-to=\"\"\nI0622 16:11:37.516680 10 server.go:231] \"Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP\"\nI0622 16:11:37.516760 10 feature_gate.go:245] feature gates: &{map[]}\nI0622 16:11:37.516889 10 feature_gate.go:245] feature gates: &{map[]}\nE0622 16:11:37.795334 10 node.go:152] Failed to retrieve node info: Get \"https://api.internal.e2e-e2e-kops-gce-stable.k8s.local/api/v1/nodes/nodes-us-west4-a-z5t6\": dial tcp: lookup api.internal.e2e-e2e-kops-gce-stable.k8s.local on 169.254.169.254:53: no such host\nE0622 16:11:38.856563 10 node.go:152] Failed to retrieve node info: Get \"https://api.internal.e2e-e2e-kops-gce-stable.k8s.local/api/v1/nodes/nodes-us-west4-a-z5t6\": dial tcp: lookup api.internal.e2e-e2e-kops-gce-stable.k8s.local on 169.254.169.254:53: no such host\nE0622 16:11:41.221423 10 node.go:152] Failed to retrieve node info: Get 
\"https://api.internal.e2e-e2e-kops-gce-stable.k8s.local/api/v1/nodes/nodes-us-west4-a-z5t6\": dial tcp: lookup api.internal.e2e-e2e-kops-gce-stable.k8s.local on 169.254.169.254:53: no such host\nI0622 16:11:45.836369 10 node.go:163] Successfully retrieved node IP: 10.0.16.3\nI0622 16:11:45.836406 10 server_others.go:138] \"Detected node IP\" address=\"10.0.16.3\"\nI0622 16:11:45.836467 10 server_others.go:578] \"Unknown proxy mode, assuming iptables proxy\" proxyMode=\"\"\nI0622 16:11:45.836606 10 server_others.go:175] \"DetectLocalMode\" LocalMode=\"ClusterCIDR\"\nI0622 16:11:45.878001 10 server_others.go:206] \"Using iptables Proxier\"\nI0622 16:11:45.878044 10 server_others.go:213] \"kube-proxy running in dual-stack mode\" ipFamily=IPv4\nI0622 16:11:45.878057 10 server_others.go:214] \"Creating dualStackProxier for iptables\"\nI0622 16:11:45.878073 10 server_others.go:501] \"Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\"\nI0622 16:11:45.878106 10 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0622 16:11:45.878200 10 utils.go:431] \"Changed sysctl\" name=\"net/ipv4/conf/all/route_localnet\" before=0 after=1\nI0622 16:11:45.878245 10 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0622 16:11:45.878274 10 proxier.go:319] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0622 16:11:45.878309 10 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0622 16:11:45.878317 10 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0622 16:11:45.878373 10 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0622 16:11:45.878388 10 proxier.go:319] \"Iptables sync 
params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0622 16:11:45.878398 10 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0622 16:11:45.878508 10 server.go:661] \"Version info\" version=\"v1.25.0-alpha.1\"\nI0622 16:11:45.878518 10 server.go:663] \"Golang settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0622 16:11:45.880529 10 conntrack.go:52] \"Setting nf_conntrack_max\" nf_conntrack_max=262144\nI0622 16:11:45.881888 10 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_tcp_timeout_close_wait\" value=3600\nI0622 16:11:45.883950 10 config.go:317] \"Starting service config controller\"\nI0622 16:11:45.884026 10 shared_informer.go:255] Waiting for caches to sync for service config\nI0622 16:11:45.884105 10 config.go:226] \"Starting endpoint slice config controller\"\nI0622 16:11:45.884153 10 shared_informer.go:255] Waiting for caches to sync for endpoint slice config\nI0622 16:11:45.886818 10 config.go:444] \"Starting node config controller\"\nI0622 16:11:45.886831 10 shared_informer.go:255] Waiting for caches to sync for node config\nI0622 16:11:45.888042 10 service.go:322] \"Service updated ports\" service=\"default/kubernetes\" portCount=1\nI0622 16:11:45.888079 10 service.go:322] \"Service updated ports\" service=\"kube-system/kube-dns\" portCount=3\nI0622 16:11:45.888487 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:45.888505 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:45.985095 10 shared_informer.go:262] Caches are synced for endpoint slice config\nI0622 16:11:45.985170 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:45.985199 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0622 16:11:45.985095 10 shared_informer.go:262] 
Caches are synced for service config\nI0622 16:11:45.985272 10 service.go:437] \"Adding new service port\" portName=\"default/kubernetes:https\" servicePort=\"100.64.0.1:443/TCP\"\nI0622 16:11:45.985299 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns\" servicePort=\"100.64.0.10:53/UDP\"\nI0622 16:11:45.985313 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns-tcp\" servicePort=\"100.64.0.10:53/TCP\"\nI0622 16:11:45.985331 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:metrics\" servicePort=\"100.64.0.10:9153/TCP\"\nI0622 16:11:45.985396 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:11:45.986871 10 shared_informer.go:262] Caches are synced for node config\nI0622 16:11:46.037483 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0622 16:11:46.057252 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"72.019872ms\"\nI0622 16:11:46.057296 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:11:46.147302 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0622 16:11:46.150143 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"92.845112ms\"\nI0622 16:11:48.803655 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:11:48.840118 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0622 16:11:48.844349 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.711271ms\"\nI0622 16:11:48.844393 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:11:48.870409 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0622 16:11:48.872479 10 proxier.go:820] 
\"SyncProxyRules complete\" elapsed=\"28.086664ms\"\nI0622 16:12:02.182275 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:12:02.214546 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0622 16:12:02.218561 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.358958ms\"\nI0622 16:12:02.506958 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:12:02.537512 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0622 16:12:02.541426 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.574402ms\"\nI0622 16:12:03.189677 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI0622 16:12:03.189711 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:12:03.219655 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=25\nI0622 16:12:03.227048 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.541861ms\"\nI0622 16:12:04.228279 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:12:04.257760 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:12:04.267630 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.504365ms\"\nI0622 16:15:33.372609 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:33.403282 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0622 16:15:33.407189 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.631379ms\"\nI0622 16:15:39.484161 10 service.go:322] \"Service updated ports\" service=\"pods-5379/fooservice\" 
portCount=1\nI0622 16:15:39.484221 10 service.go:437] \"Adding new service port\" portName=\"pods-5379/fooservice\" servicePort=\"100.68.229.199:8765/TCP\"\nI0622 16:15:39.484246 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:39.523788 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0622 16:15:39.528835 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.620775ms\"\nI0622 16:15:39.528923 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:39.571069 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:15:39.578441 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.566466ms\"\nI0622 16:15:41.011023 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:41.049056 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:15:41.054323 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.363535ms\"\nI0622 16:15:42.054954 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:42.088107 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0622 16:15:42.092402 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.554445ms\"\nI0622 16:15:42.722019 10 service.go:322] \"Service updated ports\" service=\"conntrack-6270/svc-udp\" portCount=1\nI0622 16:15:42.722066 10 service.go:437] \"Adding new service port\" portName=\"conntrack-6270/svc-udp:udp\" servicePort=\"100.64.200.158:80/UDP\"\nI0622 16:15:42.722115 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:42.754721 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 
numNATRules=39\nI0622 16:15:42.759983 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.925725ms\"\nI0622 16:15:43.760243 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:43.797406 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=39\nI0622 16:15:43.802379 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.241347ms\"\nI0622 16:15:45.605930 10 service.go:322] \"Service updated ports\" service=\"services-1794/service-headless-toggled\" portCount=1\nI0622 16:15:45.605983 10 service.go:437] \"Adding new service port\" portName=\"services-1794/service-headless-toggled\" servicePort=\"100.67.119.37:80/TCP\"\nI0622 16:15:45.606012 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:45.638658 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39\nI0622 16:15:45.643512 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.533378ms\"\nI0622 16:15:45.643563 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:45.679386 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39\nI0622 16:15:45.684905 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.360766ms\"\nI0622 16:15:48.726426 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-6270/svc-udp:udp\" clusterIP=\"100.64.200.158\"\nI0622 16:15:48.726505 10 proxier.go:847] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-6270/svc-udp:udp\" nodePort=32018\nI0622 16:15:48.726540 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:48.765587 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47\nI0622 16:15:48.777521 10 proxier.go:820] 
\"SyncProxyRules complete\" elapsed=\"51.223015ms\"\nI0622 16:15:49.118337 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:49.184410 10 proxier.go:1461] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=44\nI0622 16:15:49.192059 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"73.859467ms\"\nI0622 16:15:49.321474 10 service.go:322] \"Service updated ports\" service=\"pods-5379/fooservice\" portCount=0\nI0622 16:15:49.356874 10 service.go:462] \"Removing service port\" portName=\"pods-5379/fooservice\"\nI0622 16:15:49.356922 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:49.391206 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=18 numNATRules=42\nI0622 16:15:49.396508 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.652668ms\"\nI0622 16:15:49.396546 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:49.425996 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0622 16:15:49.428249 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.702973ms\"\nI0622 16:15:50.195369 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:50.259510 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50\nI0622 16:15:50.265396 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.112308ms\"\nI0622 16:15:56.523297 10 proxier.go:853] \"Syncing iptables rules\"\nI0622 16:15:56.554870 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53\nI0622 16:15:56.560094 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.855146ms\"\nI0622 16:15:56.754803 10 proxier.go:853] 
"Syncing iptables rules"
I0622 16:15:56.790131 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=51
I0622 16:15:56.803790 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.044837ms"
I0622 16:15:57.804119 10 proxier.go:853] "Syncing iptables rules"
I0622 16:15:57.836635 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:15:57.842031 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.077238ms"
I0622 16:16:00.291266 10 service.go:322] "Service updated ports" service="apply-4693/test-svc" portCount=1
I0622 16:16:00.291433 10 service.go:437] "Adding new service port" portName="apply-4693/test-svc" servicePort="100.65.152.156:8080/UDP"
I0622 16:16:00.291474 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:00.337741 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=50
I0622 16:16:00.342931 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.52072ms"
I0622 16:16:01.706836 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:01.761519 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:16:01.767766 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.997873ms"
I0622 16:16:03.015536 10 service.go:322] "Service updated ports" service="dns-5264/test-service-2" portCount=1
I0622 16:16:03.015586 10 service.go:437] "Adding new service port" portName="dns-5264/test-service-2:http" servicePort="100.70.81.69:80/TCP"
I0622 16:16:03.015617 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:03.052720 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:03.057459 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.875502ms"
I0622 16:16:03.057512 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:03.091970 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:03.096994 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.501459ms"
I0622 16:16:05.588452 10 service.go:322] "Service updated ports" service="apply-4693/test-svc" portCount=0
I0622 16:16:05.588497 10 service.go:462] "Removing service port" portName="apply-4693/test-svc"
I0622 16:16:05.588526 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:05.619759 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:16:05.627797 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.298445ms"
I0622 16:16:12.237420 10 service.go:322] "Service updated ports" service="conntrack-6270/svc-udp" portCount=0
I0622 16:16:12.237456 10 service.go:462] "Removing service port" portName="conntrack-6270/svc-udp:udp"
I0622 16:16:12.237505 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:12.269684 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=48
I0622 16:16:12.277848 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.391888ms"
I0622 16:16:12.277918 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:12.311033 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 16:16:12.315641 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.76067ms"
I0622 16:16:13.315876 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:13.346932 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:16:13.351627 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.826666ms"
I0622 16:16:22.800891 10 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=0
I0622 16:16:22.800936 10 service.go:462] "Removing service port" portName="services-1794/service-headless-toggled"
I0622 16:16:22.800970 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:22.833456 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=43
I0622 16:16:22.839009 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.071514ms"
I0622 16:16:29.702791 10 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=1
I0622 16:16:29.702849 10 service.go:437] "Adding new service port" portName="services-1794/service-headless-toggled" servicePort="100.67.119.37:80/TCP"
I0622 16:16:29.702881 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:29.754035 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:16:29.759611 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.769218ms"
I0622 16:16:40.658829 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:40.702218 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=47
I0622 16:16:40.708695 10 service.go:322] "Service updated ports" service="dns-5264/test-service-2" portCount=0
I0622 16:16:40.709167 10 proxier.go:820] "SyncProxyRules complete" elapsed="50.396578ms"
I0622 16:16:40.709203 10 service.go:462] "Removing service port" portName="dns-5264/test-service-2:http"
I0622 16:16:40.709250 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:40.756483 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45
I0622 16:16:40.762037 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.834835ms"
I0622 16:16:41.762251 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:41.795919 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=45
I0622 16:16:41.800763 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.563489ms"
I0622 16:16:48.158364 10 service.go:322] "Service updated ports" service="services-1076/externalname-service" portCount=1
I0622 16:16:48.158420 10 service.go:437] "Adding new service port" portName="services-1076/externalname-service:http" servicePort="100.70.187.19:80/TCP"
I0622 16:16:48.158457 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:48.190250 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=45
I0622 16:16:48.194927 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.513097ms"
I0622 16:16:49.291721 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:49.327152 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=50
I0622 16:16:49.331959 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.313402ms"
I0622 16:16:51.762585 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:51.799102 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=53
I0622 16:16:51.804235 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.732081ms"
I0622 16:16:56.675646 10 service.go:322] "Service updated ports" service="dns-7433/dns-test-service-3" portCount=1
I0622 16:16:56.675699 10 service.go:437] "Adding new service port" portName="dns-7433/dns-test-service-3:http" servicePort="100.68.250.227:80/TCP"
I0622 16:16:56.675958 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:56.709972 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=53
I0622 16:16:56.714701 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.009917ms"
I0622 16:16:56.802093 10 service.go:322] "Service updated ports" service="services-8585/tolerate-unready" portCount=1
I0622 16:16:56.802171 10 service.go:437] "Adding new service port" portName="services-8585/tolerate-unready:http" servicePort="100.69.135.109:80/TCP"
I0622 16:16:56.802246 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:56.839271 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:56.844667 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.528432ms"
I0622 16:16:57.844901 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:57.879756 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=53
I0622 16:16:57.885191 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.330606ms"
I0622 16:16:58.422808 10 service.go:322] "Service updated ports" service="services-1794/service-headless-toggled" portCount=0
I0622 16:16:58.886342 10 service.go:462] "Removing service port" portName="services-1794/service-headless-toggled"
I0622 16:16:58.886469 10 proxier.go:853] "Syncing iptables rules"
I0622 16:16:58.920846 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=46
I0622 16:16:58.931343 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.024608ms"
I0622 16:17:01.543297 10 service.go:322] "Service updated ports" service="services-1069/clusterip-service" portCount=1
I0622 16:17:01.543346 10 service.go:437] "Adding new service port" portName="services-1069/clusterip-service" servicePort="100.67.88.217:80/TCP"
I0622 16:17:01.543374 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:01.600767 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 16:17:01.607171 10 proxier.go:820] "SyncProxyRules complete" elapsed="63.82807ms"
I0622 16:17:01.607239 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:01.631073 10 service.go:322] "Service updated ports" service="services-1069/externalsvc" portCount=1
I0622 16:17:01.648413 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=18 numNATRules=42
I0622 16:17:01.654781 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.573806ms"
I0622 16:17:02.341263 10 service.go:322] "Service updated ports" service="services-1076/externalname-service" portCount=0
I0622 16:17:02.655384 10 service.go:437] "Adding new service port" portName="services-1069/externalsvc" servicePort="100.68.82.182:80/TCP"
I0622 16:17:02.655438 10 service.go:462] "Removing service port" portName="services-1076/externalname-service:http"
I0622 16:17:02.655605 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:02.708719 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=18 numNATRules=37
I0622 16:17:02.729294 10 proxier.go:820] "SyncProxyRules complete" elapsed="73.99542ms"
I0622 16:17:03.730013 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:03.762331 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:17:03.767302 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.353051ms"
I0622 16:17:05.818932 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:05.851260 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 16:17:05.856377 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.492217ms"
I0622 16:17:06.818305 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:06.852794 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=47
I0622 16:17:06.857417 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.179845ms"
I0622 16:17:07.861867 10 service.go:322] "Service updated ports" service="services-1069/clusterip-service" portCount=0
I0622 16:17:07.861933 10 service.go:462] "Removing service port" portName="services-1069/clusterip-service"
I0622 16:17:07.861967 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:07.913356 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:17:07.920680 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.745015ms"
I0622 16:17:14.236986 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:14.291659 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:17:14.297821 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.962339ms"
I0622 16:17:16.826116 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:16.870545 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=42
I0622 16:17:16.875622 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.566188ms"
I0622 16:17:17.215984 10 service.go:322] "Service updated ports" service="services-1178/endpoint-test2" portCount=1
I0622 16:17:17.216035 10 service.go:437] "Adding new service port" portName="services-1178/endpoint-test2" servicePort="100.70.128.117:80/TCP"
I0622 16:17:17.216073 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:17.253442 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=39
I0622 16:17:17.258900 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.872663ms"
I0622 16:17:17.569791 10 service.go:322] "Service updated ports" service="endpointslice-5318/example-int-port" portCount=1
I0622 16:17:17.630998 10 service.go:322] "Service updated ports" service="endpointslice-5318/example-named-port" portCount=1
I0622 16:17:17.683943 10 service.go:322] "Service updated ports" service="endpointslice-5318/example-no-match" portCount=1
I0622 16:17:18.259743 10 service.go:437] "Adding new service port" portName="endpointslice-5318/example-int-port:example" servicePort="100.67.57.248:80/TCP"
I0622 16:17:18.259784 10 service.go:437] "Adding new service port" portName="endpointslice-5318/example-named-port:http" servicePort="100.71.216.157:80/TCP"
I0622 16:17:18.260032 10 service.go:437] "Adding new service port" portName="endpointslice-5318/example-no-match:example-no-match" servicePort="100.66.17.13:80/TCP"
I0622 16:17:18.260273 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:18.293100 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=10 numFilterChains=4 numFilterRules=9 numNATChains=17 numNATRules=39
I0622 16:17:18.297755 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.087837ms"
I0622 16:17:19.127694 10 service.go:322] "Service updated ports" service="dns-7433/dns-test-service-3" portCount=0
I0622 16:17:19.127742 10 service.go:462] "Removing service port" portName="dns-7433/dns-test-service-3:http"
I0622 16:17:19.127793 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:19.157925 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=17 numNATRules=39
I0622 16:17:19.162823 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.08479ms"
I0622 16:17:20.163082 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:20.203958 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=10 numFilterChains=4 numFilterRules=9 numNATChains=17 numNATRules=36
I0622 16:17:20.208223 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.212755ms"
I0622 16:17:20.888768 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:20.922594 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=11 numFilterChains=4 numFilterRules=8 numNATChains=17 numNATRules=39
I0622 16:17:20.927532 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.83111ms"
I0622 16:17:21.928580 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:21.994316 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=44
I0622 16:17:22.001494 10 proxier.go:820] "SyncProxyRules complete" elapsed="73.002022ms"
I0622 16:17:23.002702 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:23.037062 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=49
I0622 16:17:23.042585 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.012273ms"
I0622 16:17:23.548259 10 service.go:322] "Service updated ports" service="services-8585/tolerate-unready" portCount=0
I0622 16:17:24.042810 10 service.go:462] "Removing service port" portName="services-8585/tolerate-unready:http"
I0622 16:17:24.043090 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:24.075931 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=46
I0622 16:17:24.080676 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.887708ms"
I0622 16:17:27.013775 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:27.045987 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=44
I0622 16:17:27.050710 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.172782ms"
I0622 16:17:27.210303 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:27.253305 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=44
I0622 16:17:27.258014 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.757296ms"
I0622 16:17:27.329865 10 service.go:322] "Service updated ports" service="services-1069/externalsvc" portCount=0
I0622 16:17:28.258188 10 service.go:462] "Removing service port" portName="services-1069/externalsvc"
I0622 16:17:28.258276 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:28.305993 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=44
I0622 16:17:28.312517 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.351294ms"
I0622 16:17:29.312891 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:29.346509 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52
I0622 16:17:29.351304 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.604472ms"
I0622 16:17:30.470652 10 service.go:322] "Service updated ports" service="webhook-2919/e2e-test-webhook" portCount=1
I0622 16:17:30.470738 10 service.go:437] "Adding new service port" portName="webhook-2919/e2e-test-webhook" servicePort="100.66.137.215:8443/TCP"
I0622 16:17:30.470775 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:30.506533 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=52
I0622 16:17:30.512301 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.569093ms"
I0622 16:17:31.512518 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:31.551929 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57
I0622 16:17:31.556708 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.275787ms"
I0622 16:17:35.590739 10 service.go:322] "Service updated ports" service="kubectl-9993/agnhost-primary" portCount=1
I0622 16:17:35.590918 10 service.go:437] "Adding new service port" portName="kubectl-9993/agnhost-primary" servicePort="100.66.220.73:6379/TCP"
I0622 16:17:35.590957 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:35.643058 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=57
I0622 16:17:35.647508 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.602051ms"
I0622 16:17:35.647559 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:35.692601 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=57
I0622 16:17:35.697693 10 proxier.go:820] "SyncProxyRules complete" elapsed="50.140251ms"
I0622 16:17:37.003720 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:37.059706 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=60
I0622 16:17:37.067547 10 proxier.go:820] "SyncProxyRules complete" elapsed="63.902117ms"
I0622 16:17:40.371395 10 service.go:322] "Service updated ports" service="webhook-2919/e2e-test-webhook" portCount=0
I0622 16:17:40.371440 10 service.go:462] "Removing service port" portName="webhook-2919/e2e-test-webhook"
I0622 16:17:40.371472 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:40.403620 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=57
I0622 16:17:40.408047 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.60545ms"
I0622 16:17:40.416818 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:40.449844 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=55
I0622 16:17:40.455631 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.863429ms"
I0622 16:17:41.273063 10 service.go:322] "Service updated ports" service="crd-webhook-6908/e2e-test-crd-conversion-webhook" portCount=1
I0622 16:17:41.455937 10 service.go:437] "Adding new service port" portName="crd-webhook-6908/e2e-test-crd-conversion-webhook" servicePort="100.66.97.249:9443/TCP"
I0622 16:17:41.456039 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:41.490636 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=58
I0622 16:17:41.495905 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.036749ms"
I0622 16:17:43.144313 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:43.180718 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=24 numNATRules=54
I0622 16:17:43.185395 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.176059ms"
I0622 16:17:43.332232 10 service.go:322] "Service updated ports" service="services-1178/endpoint-test2" portCount=0
I0622 16:17:44.156154 10 service.go:462] "Removing service port" portName="services-1178/endpoint-test2"
I0622 16:17:44.156366 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:44.194128 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57
I0622 16:17:44.202065 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.921065ms"
I0622 16:17:45.202213 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:45.257095 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=57
I0622 16:17:45.269002 10 proxier.go:820] "SyncProxyRules complete" elapsed="66.836281ms"
I0622 16:17:45.946374 10 service.go:322] "Service updated ports" service="crd-webhook-6908/e2e-test-crd-conversion-webhook" portCount=0
I0622 16:17:45.946413 10 service.go:462] "Removing service port" portName="crd-webhook-6908/e2e-test-crd-conversion-webhook"
I0622 16:17:45.946444 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:45.978203 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=54
I0622 16:17:45.983622 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.20862ms"
I0622 16:17:46.984667 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:47.028862 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52
I0622 16:17:47.034320 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.750088ms"
I0622 16:17:48.158013 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:48.192533 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=50
I0622 16:17:48.197664 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.70948ms"
I0622 16:17:49.179321 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:49.212528 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=50
I0622 16:17:49.218137 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.876378ms"
I0622 16:17:50.218838 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:50.278738 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=52
I0622 16:17:50.284298 10 proxier.go:820] "SyncProxyRules complete" elapsed="65.56474ms"
I0622 16:17:52.126694 10 service.go:322] "Service updated ports" service="kubectl-9993/agnhost-primary" portCount=0
I0622 16:17:52.126735 10 service.go:462] "Removing service port" portName="kubectl-9993/agnhost-primary"
I0622 16:17:52.126767 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:52.186663 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=49
I0622 16:17:52.192723 10 proxier.go:820] "SyncProxyRules complete" elapsed="65.983672ms"
I0622 16:17:52.195196 10 proxier.go:853] "Syncing iptables rules"
I0622 16:17:52.232099 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:17:52.243988 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.822199ms"
I0622 16:18:03.516312 10 proxier.go:853] "Syncing iptables rules"
I0622 16:18:03.550290 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:18:03.555598 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.374108ms"
I0622 16:18:03.555790 10 proxier.go:853] "Syncing iptables rules"
I0622 16:18:03.585700 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=47
I0622 16:18:03.590715 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.085719ms"
I0622 16:18:03.774322 10 service.go:322] "Service updated ports" service="endpointslice-5318/example-int-port" portCount=0
I0622 16:18:03.787711 10 service.go:322] "Service updated ports" service="endpointslice-5318/example-named-port" portCount=0
I0622 16:18:03.811192 10 service.go:322] "Service updated ports" service="endpointslice-5318/example-no-match" portCount=0
I0622 16:18:04.590914 10 service.go:462] "Removing service port" portName="endpointslice-5318/example-no-match:example-no-match"
I0622 16:18:04.590961 10 service.go:462] "Removing service port" portName="endpointslice-5318/example-int-port:example"
I0622 16:18:04.590996 10 service.go:462] "Removing service port" portName="endpointslice-5318/example-named-port:http"
I0622 16:18:04.591257 10 proxier.go:853] "Syncing iptables rules"
I0622 16:18:04.627554 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=39
I0622 16:18:04.631596 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.707048ms"
I0622 16:18:45.591859 10 service.go:322] "Service updated ports" service="webhook-7868/e2e-test-webhook" portCount=1
I0622 16:18:45.591917 10 service.go:437] "Adding new service port" portName="webhook-7868/e2e-test-webhook" servicePort="100.68.35.10:8443/TCP"
I0622 16:18:45.591949 10 proxier.go:853] "Syncing iptables rules"
I0622 16:18:45.622582 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34
I0622 16:18:45.627065 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.153533ms"
I0622 16:18:45.627150 10 proxier.go:853] "Syncing iptables rules"
I0622 16:18:45.660700 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39
I0622 16:18:45.665611 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.513406ms"
I0622 16:18:59.965793 10 service.go:322] "Service updated ports" service="webhook-7868/e2e-test-webhook" portCount=0
I0622 16:18:59.965834 10 service.go:462] "Removing service port" portName="webhook-7868/e2e-test-webhook"
I0622 16:18:59.966025 10 proxier.go:853] "Syncing iptables rules"
I0622 16:18:59.997480 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36
I0622 16:19:00.001713 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.874739ms"
I0622 16:19:00.001800 10 proxier.go:853] "Syncing iptables rules"
I0622 16:19:00.034655 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 16:19:00.039410 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.661522ms"
I0622 16:19:34.503214 10 service.go:322] "Service updated ports" service="services-1711/nodeport-reuse" portCount=1
I0622 16:19:34.503268 10 service.go:437] "Adding new service port" portName="services-1711/nodeport-reuse" servicePort="100.71.159.219:80/TCP"
I0622 16:19:34.503606 10 proxier.go:853] "Syncing iptables rules"
I0622 16:19:34.535436 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34
I0622 16:19:34.540910 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.634596ms"
I0622 16:19:34.541037 10 proxier.go:853] "Syncing iptables rules"
I0622 16:19:34.555700 10 service.go:322] "Service updated ports" service="services-1711/nodeport-reuse" portCount=0
I0622 16:19:34.572110 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34
I0622 16:19:34.577016 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.077177ms"
I0622 16:19:35.577909 10 service.go:462] "Removing service port" portName="services-1711/nodeport-reuse"
I0622 16:19:35.577997 10 proxier.go:853] "Syncing iptables rules"
I0622 16:19:35.620475 10 proxier.go:1461] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34
I0622 16:19:35.625155 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.267399ms"
I0622 16:19:43.337257 10 service.go:322] "Service updated ports" service="services-1711/nodeport-reuse" portCount=1
I0622 16:19:43.337330 10 service.go:437] "Adding new service port" portName="services-1711/nodeport-reuse" servicePort="100.67.196.227:80/TCP"
I0622 16:19:43.337393 10 proxier.go:853] "Syncing iptables rules"
I0622 16:19:43.368738 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=34
I0622 16:19:43.373540 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.21546ms"
I0622 16:19:43.373596 10 proxier.go:853] "Syncing iptables rules"
I0622 16:19:43.398908 10 service.go:322] "Service updated ports" service="services-1711/nodeport-reuse" portCount=0
I0622 16:19:43.410816 10 proxier.go:1461] "Reloading service iptables data" n